[DO NOT MERGE] gemini-sdk-batch-support #1001
base: 11-28-batch-apis
Conversation
Warning: This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
This stack of pull requests is managed by Graphite. Learn more about stacking.
📝 Walkthrough

Summary by CodeRabbit
✏️ Tip: You can customize this high-level summary in your review settings.

Walkthrough
Adds an end-to-end BatchDelete capability: new BatchDelete request/response schemas, core …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Client
participant HTTP_Router as "HTTP Router (GenAI)"
participant SessionStore as "Resumable Session Store"
participant Bifrost_Core as "Bifrost Core"
participant Provider as "Provider (e.g. Gemini)"
Client->>HTTP_Router: POST /genai/batch.delete (SDK or HTTP)
HTTP_Router->>SessionStore: check/init resumable session (file upload)
alt resumable init (phase 1)
SessionStore-->>HTTP_Router: session created / ErrResumableUploadInit
HTTP_Router-->>Client: 200 (init ack)
else proceed to delete
HTTP_Router->>Bifrost_Core: BatchDeleteRequest(ctx, req)
Bifrost_Core->>Provider: Provider.BatchDelete(ctx, key, req)
Provider-->>Bifrost_Core: BifrostBatchDeleteResponse / error
Bifrost_Core-->>HTTP_Router: response or wrapped error
HTTP_Router->>Client: formatted SDK/HTTP response
end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)

📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (24)
🚧 Files skipped from review as they are similar to previous changes (10)
🧰 Additional context used

📓 Path-based instructions (1)
⚙️ CodeRabbit configuration file
Files:

🧠 Learnings (1)
📚 Learning: 2025-12-09T17:07:42.007Z
Applied to files:
🧬 Code graph analysis (12)
- core/providers/parasail/batch.go (3)
- core/providers/sgl/batch.go (3)
- core/bifrost.go (3)
- core/providers/anthropic/batch.go (4)
- core/providers/elevenlabs/batch.go (3)
- core/providers/openrouter/batch.go (4)
- transports/bifrost-http/integrations/router.go (4)
- core/providers/gemini/batch.go (4)
- core/providers/perplexity/batch.go (3)
- core/providers/azure/batch.go (3)
- core/providers/gemini/files.go (1)
- core/providers/gemini/types.go (1)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
🔇 Additional comments (17)
Actionable comments posted: 6
🧹 Nitpick comments (7)
transports/bifrost-http/integrations/utils.go (1)
194-194: Redundant Content-Length header setting.
The fasthttp library automatically sets the Content-Length header when SetBody() is called. Explicitly setting it here is unnecessary. Apply this diff to remove the redundant header:

```diff
-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
 	ctx.SetBody(responseBody)
```

core/providers/gemini/batch.go (3)
870-896: Consider defensive handling for batch stats calculations.
The calculations for PendingRequestCount and SuccessfulRequestCount (lines 889-890) could produce negative values if the underlying data is inconsistent (e.g., Completed > Total or Failed > Completed).

```diff
 BatchStats: &GeminiBatchStats{
 	RequestCount:           resp.RequestCounts.Total,
-	PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-	SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
+	PendingRequestCount:    max(0, resp.RequestCounts.Total-resp.RequestCounts.Completed),
+	SuccessfulRequestCount: max(0, resp.RequestCounts.Completed-resp.RequestCounts.Failed),
 },
```

898-933: Same defensive handling consideration applies here.
The PendingRequestCount and SuccessfulRequestCount calculations at lines 916-917 have the same potential for negative values as noted above.

935-972: Same defensive handling consideration for list response conversion.
Lines 955-956 have the same calculation pattern that could produce negative values.
transports/bifrost-http/integrations/genai.go (3)
52-67: Consider adding graceful shutdown for the cleanup goroutine.
The goroutine started in init() runs forever with no way to stop it. While this works for long-running services, it could cause issues in tests or if the package is used in a context where cleanup is expected. Consider exposing a shutdown mechanism or using context cancellation; a stoppable variant is sketched below.
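A minimal sketch of a stoppable variant, assuming the expiry logic itself lives elsewhere in the file; the names sessionCleanupStop, startSessionCleanup, and stopSessionCleanup are hypothetical:

```go
package integrations

import "time"

// sessionCleanupStop signals the background cleanup loop to exit.
var sessionCleanupStop = make(chan struct{})

// startSessionCleanup runs the supplied cleanup func on a ticker until
// stopped, instead of a fire-and-forget goroutine launched from init().
func startSessionCleanup(interval time.Duration, cleanup func()) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				cleanup() // e.g. drop resumable-upload sessions older than 1h
			case <-sessionCleanupStop:
				return
			}
		}
	}()
}

// stopSessionCleanup terminates the loop; tests can call it in a cleanup hook.
func stopSessionCleanup() { close(sessionCleanupStop) }
```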
217-237: Consider extracting duplicated resumable upload route configuration.
The POST and PUT routes for /upload/v1beta/files/resumable/{session_id} share nearly identical configuration. Consider extracting the common logic to reduce duplication.

```go
// Example refactor:
func createResumableUploadRouteConfig(method string, pathPrefix string) RouteConfig {
	return RouteConfig{
		Type:   RouteConfigTypeGenAI,
		Path:   pathPrefix + "/upload/v1beta/files/resumable/{session_id}",
		Method: method,
		// ... shared configuration
	}
}
```

Also applies to: 273-293
746-778: Minor: extractGeminiBatchIDFromPathCancel duplicates logic from extractGeminiBatchIDFromPath.
The only difference is the strings.TrimSuffix(batchIDStr, ":cancel") line. Consider consolidating into a single function with a parameter, or rely on the router's path matching to exclude the :cancel suffix; one possible consolidation is sketched below.
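A sketch of the merged helper, under the assumption that the parser reads the batch ID from the fasthttp path parameters; the signature and error text are illustrative:

```go
package integrations

import (
	"context"
	"errors"
	"strings"

	"github.com/valyala/fasthttp"
)

// extractGeminiBatchID folds the only difference between the two existing
// helpers (stripping the ":cancel" action suffix) into a parameter.
// Hypothetical sketch, not the PR's actual code.
func extractGeminiBatchID(ctx *fasthttp.RequestCtx, bifrostCtx *context.Context, trimCancel bool) (string, error) {
	batchIDStr, ok := ctx.UserValue("batch_id").(string)
	if !ok || batchIDStr == "" {
		return "", errors.New("batch_id path parameter is required")
	}
	if trimCancel {
		batchIDStr = strings.TrimSuffix(batchIDStr, ":cancel")
	}
	// ...shared provider extraction into *bifrostCtx would follow here...
	return batchIDStr, nil
}
```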
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (7 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/openrouter/batch.go
- core/schemas/bifrost.go
- core/providers/anthropic/batch.go
- core/providers/mistral/batch.go
- core/providers/perplexity/batch.go
- core/providers/openai/batch.go
- core/providers/cohere/batch.go
- core/providers/cerebras/batch.go
- core/providers/ollama/batch.go
- core/schemas/batch.go
- core/providers/groq/batch.go
- core/bifrost.go
- core/providers/sgl/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/gemini/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/azure/batch.go
- core/providers/parasail/batch.go
- core/providers/vertex/batch.go
- transports/bifrost-http/integrations/router.go
- core/schemas/provider.go
- core/providers/bedrock/batch.go
- core/providers/gemini/files.go
- transports/bifrost-http/integrations/genai.go
- core/providers/gemini/types.go
🧬 Code graph analysis (17)
core/providers/openrouter/batch.go (5)
- core/providers/openrouter/openrouter.go (1): OpenRouterProvider (18-23)
- core/schemas/account.go (1): Key (8-17)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/schemas/bifrost.go (1)
- ui/lib/types/config.ts (1): RequestType (122-134)

core/providers/anthropic/batch.go (4)
- core/providers/anthropic/anthropic.go (1): AnthropicProvider (21-28)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/mistral/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/openai/batch.go (4)
- core/providers/openai/openai.go (1): OpenAIProvider (24-30)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/cohere/batch.go (4)
- core/providers/cohere/cohere.go (1): CohereProvider (67-73)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/cerebras/batch.go (4)
- core/providers/cerebras/cerebras.go (1): CerebrasProvider (16-21)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/ollama/batch.go (4)
- core/providers/ollama/ollama.go (1): OllamaProvider (18-23)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/schemas/batch.go (2)
- core/schemas/provider.go (1): Provider (313-362)
- core/schemas/bifrost.go (2): ModelProvider (32-32), BifrostResponseExtraFields (295-304)

core/providers/groq/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/sgl/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/azure/batch.go (5)
- core/providers/azure/azure.go (1): AzureProvider (23-28)
- core/schemas/account.go (1): Key (8-17)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/parasail/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/vertex/batch.go (4)
- core/providers/vertex/vertex.go (1): VertexProvider (57-62)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/schemas/provider.go (2)
- core/schemas/bifrost.go (1): BatchDeleteRequest (103-103)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)

core/providers/bedrock/batch.go (4)
- core/providers/bedrock/bedrock.go (1): BedrockProvider (29-35)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/gemini/files.go (1)
- core/schemas/files.go (1): BifrostFileUploadResponse (89-105)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (35)
core/providers/sgl/batch.go (1)
35-38: LGTM! The BatchDelete method correctly follows the established pattern for unsupported operations in the SGL provider, consistent with other batch methods in this file.

core/providers/groq/batch.go (1)
35-38: LGTM! The BatchDelete method correctly follows the established pattern for unsupported operations in the Groq provider, consistent with other batch methods in this file.

core/providers/openrouter/batch.go (1)
35-38: LGTM! The BatchDelete method correctly follows the established pattern for unsupported operations in the OpenRouter provider, consistent with other batch methods in this file.

core/providers/vertex/batch.go (1)
35-38: LGTM! The BatchDelete method correctly follows the established pattern for unsupported operations in the Vertex AI provider, consistent with other batch methods in this file.

core/providers/azure/batch.go (1)
438-441: LGTM! The BatchDelete method correctly follows the established pattern for unsupported operations in the Azure provider, consistent with other batch methods across all providers.

core/providers/anthropic/batch.go (1)
783-786: LGTM! The BatchDelete method correctly follows the established pattern for unsupported operations in the Anthropic provider, consistent with other batch methods across all providers.

core/providers/mistral/batch.go (1)
35-38: LGTM! The BatchDelete method correctly follows the established pattern for unsupported operations in the Mistral provider, consistent with other batch methods in this file.

core/providers/elevenlabs/batch.go (1)
35-38: LGTM! Consistent implementation. The BatchDelete method correctly follows the established pattern for unsupported batch operations in the Elevenlabs provider, consistent with the other Batch* methods in this file.

core/providers/cohere/batch.go (1)
35-38: LGTM! Proper unsupported operation handling. The BatchDelete implementation correctly returns an unsupported operation error, maintaining consistency with other batch operations in the Cohere provider.

core/schemas/bifrost.go (1)
103-103: LGTM! RequestType constant properly added. The BatchDeleteRequest constant is correctly placed within the batch operation constants group and follows the established naming convention.

core/providers/bedrock/batch.go (1)
931-934: LGTM! Consistent with the Bedrock batch API pattern. The BatchDelete method correctly returns an unsupported operation error, matching the pattern of other unsupported batch operations in the Bedrock provider.

core/providers/cerebras/batch.go (1)
35-38: LGTM! Follows the established pattern. The BatchDelete implementation is correct and consistent with other unsupported batch operations in the Cerebras provider.

core/providers/openai/batch.go (1)
597-600: LGTM! Proper unsupported operation. The BatchDelete method correctly signals that this operation is not supported by the OpenAI provider, following the same pattern as other providers.

core/providers/ollama/batch.go (1)
35-38: LGTM! Completes the provider interface. The BatchDelete implementation correctly returns an unsupported operation error, maintaining consistency with the Ollama provider's other batch operations.

core/schemas/provider.go (3)
182-182: LGTM! AllowedRequests field added correctly. The BatchDelete field is properly added to the AllowedRequests struct, maintaining consistency with other batch operation flags.
231-232: LGTM! Permission check implemented correctly. The BatchDeleteRequest case is properly handled in IsOperationAllowed, returning the ar.BatchDelete flag value as expected.
350-351: LGTM! Provider interface extended properly. The BatchDelete method is correctly added to the Provider interface with the appropriate signature, matching the pattern of other batch operations. All providers in this PR implement this interface method.

core/providers/perplexity/batch.go (1)
35-39: LGTM! The BatchDelete implementation follows the established pattern for unsupported operations in this provider, consistent with the other Batch* methods above.

core/providers/parasail/batch.go (1)
35-39: LGTM! The BatchDelete implementation is consistent with the existing unsupported batch operations pattern.

core/bifrost.go (1)
1133-1209: LGTM! The BatchDeleteRequest method follows the established pattern from other batch operations (BatchRetrieveRequest, BatchCancelRequest, BatchResultsRequest) with proper validation, provider resolution, key selection, and retry logic.

core/schemas/batch.go (1)
265-281: LGTM! The BifrostBatchDeleteRequest and BifrostBatchDeleteResponse types follow the established patterns from other batch operations. The response structure with ID, Object, and Deleted fields aligns with standard delete response semantics.

core/providers/gemini/batch.go (3)
846-868: LGTM! The ToGeminiJobState function provides a clear mapping from Bifrost batch statuses to Gemini SDK job states, with sensible defaults for edge cases.
974-984: LGTM! Simple and correct conversion for the cancel response.
986-1043: LGTM! The BatchDelete implementation correctly:
- Validates that the operation is allowed
- Validates that batch_id is required
- Handles both prefixed (batches/...) and non-prefixed batch IDs
- Uses the HTTP DELETE method
- Accepts both 200 OK and 204 No Content as success responses
- Returns a properly structured response with latency metadata

The implementation is consistent with the patterns established by the BatchCancel and BatchRetrieve methods.

transports/bifrost-http/integrations/router.go (3)
88-88: LGTM - Consistent extension of the BatchRequest struct. The addition of the DeleteRequest field follows the established pattern for other batch request types.
159-162: LGTM - BatchDeleteResponseConverter type definition. Follows the same pattern as other batch response converters in this file.
820-841: LGTM - BatchDeleteRequest handling follows the established pattern. The implementation is consistent with other batch request handlers (create, list, retrieve, cancel, results), including proper nil checks, error handling, post-callback invocation, and optional response conversion.

core/providers/gemini/files.go (3)
466-485: LGTM - Well-structured wrapper type and conversion function. The GeminiFileUploadResponseWrapper provides proper structure for the Gemini API response format, and ToGeminiFileUploadResponse correctly maps Bifrost fields to Gemini fields. The hardcoded "application/octet-stream" MIME type is a reasonable default when the actual MIME type is not available in the Bifrost response.
488-519: LGTM - Consistent conversion implementations. ToGeminiFileListResponse and ToGeminiFileRetrieveResponse follow the same patterns as ToGeminiFileUploadResponse and correctly handle all field mappings, including optional fields like ExpiresAt.
522-549: LGTM - Clean helper functions.
- toGeminiFileState: properly maps Bifrost status to Gemini state strings with sensible default handling
- formatGeminiTimestamp: correctly returns an empty string for zero values and uses RFC3339 format
- safeDerefInt64: safe nil pointer handling

transports/bifrost-http/integrations/genai.go (1)
438-661: LGTM - Batch route configurations follow established patterns. The CreateGenAIBatchRouteConfigs function properly implements batch operations (create, list, retrieve, cancel, delete) with:
- Consistent provider extraction from headers
- Proper request/response conversion
- Appropriate error handling

The implementation aligns with the existing file route configurations in this file.

core/providers/gemini/types.go (4)
1578-1589: LGTM - SDK job state constants. The new GeminiJobState* constants properly define the Google GenAI SDK job states, which are distinct from the internal batch states defined earlier in the file.
1594-1651: LGTM - SDK batch request/response types. The new SDK-aligned types (GeminiBatchCreateRequestSDK, GeminiBatchJobResponseSDK, etc.) are well-structured:
- Src interface{} correctly allows both a string (file reference) and an array (inline requests)
- Types mirror the Google GenAI Python SDK expectations
- Proper JSON tags for serialization

1663-1678: LGTM - File request types are properly defined. The GeminiFileListRequest, GeminiFileRetrieveRequest, and GeminiFileDeleteRequest types provide clean structures for file operations with appropriate optional fields.
1656-1662: No breaking changes from removing the Provider field. The removal of the Provider field from the GeminiFileUploadRequest struct does not break any existing callers. No code in the codebase accesses this field directly. The provider information is obtained from the HTTP context (bifrostContextKeyProvider) and maintained separately in upload sessions, making the removal safe.
```go
func generateSessionID() string {
	bytes := make([]byte, 16)
	rand.Read(bytes)
	return hex.EncodeToString(bytes)
}
```
Handle error from crypto/rand.Read.
While crypto/rand.Read rarely fails, ignoring the error could lead to predictable session IDs if the random source is unavailable, which could be a security concern for session hijacking.

```diff
 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Fallback to a less ideal but still unique ID
+		// This should rarely happen in practice
+		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().UnixNano())
+	}
 	return hex.EncodeToString(bytes)
 }
```

🤖 Prompt for AI Agents
transports/bifrost-http/integrations/genai.go lines 45-49: the call to
crypto/rand.Read ignores its error which can lead to weak/predictable session
IDs if the random source fails; change generateSessionID to return (string,
error) (or propagate an error) and check the error from rand.Read (or
io.ReadFull) before hex-encoding the bytes, returning an error if reading
failed; update all callers to handle the new error return (or if you prefer a
panicking behavior for unrecoverable failure, explicitly handle the error with a
clear panic/log.Fatalf and keep the string return).
| fmt.Printf("[DEBUG] FileUploadResponseConverter (phase2 POST): using raw response\n") | ||
| return resp.ExtraFields.RawResponse, nil | ||
| } | ||
| result := gemini.ToGeminiFileUploadResponse(resp) | ||
| // If displayName is empty, use the original filename from context | ||
| if result.File.DisplayName == "" { | ||
| if originalFilename := (*ctx).Value(contextKeyOriginalFilename{}); originalFilename != nil { | ||
| if filename, ok := originalFilename.(string); ok && filename != "" { | ||
| result.File.DisplayName = filename | ||
| fmt.Printf("[DEBUG] FileUploadResponseConverter (phase2 POST): set displayName from context=%s\n", filename) | ||
| } | ||
| } | ||
| } | ||
| fmt.Printf("[DEBUG] FileUploadResponseConverter (phase2 POST): converted response=%+v\n", result) | ||
| return result, nil |
Remove all debug fmt.Printf statements.
There are numerous [DEBUG] print statements throughout this file that should be removed before merging. These include:
- Lines 240, 249, 253-254: FileUploadResponseConverter debug logs
- Lines 792-793, 813-814, 818-819, 824, 835-836, 841, 854-855, 860-861: parseGeminiResumableUpload debug logs
- Lines 1001, 1013, 1022, 1034, 1046, 1062-1063: parseGeminiResumableUploadPhase2 debug logs
- Lines 1072-1077: setResumableUploadFinalStatus debug logs
- Lines 1108, 1125, 1129, 1147-1148: extractGeminiFileUploadParams debug logs
If debug logging is needed for production troubleshooting, use a proper logger with configurable log levels instead.
Also applies to: 792-793, 813-814, 818-819, 824-824, 835-836, 841-841, 854-855, 860-861, 1001-1001, 1013-1013, 1022-1022, 1034-1034, 1046-1046, 1062-1063, 1072-1077, 1108-1108, 1125-1125, 1129-1129, 1147-1148
🤖 Prompt for AI Agents
transports/bifrost-http/integrations/genai.go lines ~240-254 (and also at the
other locations listed): remove all debug fmt.Printf calls (the “[DEBUG]” print
statements) present around the specified lines; replace them with calls to the
package's logger (e.g., logger.Debugf or equivalent) if persistent debug logging
is required, or simply delete the lines if no logging is needed, and clean up
any unused fmt imports left behind.
```go
	}
} else {
	// This is file content
	uploadReq.File = []byte(content)
```
🧩 Analysis chain

🏁 Script executed:
```shell
# Check the context around line 929 in genai.go
cat -n transports/bifrost-http/integrations/genai.go | sed -n '920,940p'
```
Repository: maximhq/bifrost
Length of output: 748

🏁 Script executed:
```shell
# Search for multipart/related usage and binary file handling
rg -n "multipart/related" --type go -C5
```
Repository: maximhq/bifrost
Length of output: 3198

🏁 Script executed:
```shell
# Check how 'content' variable is assigned before line 929
rg -n "content.*=" transports/bifrost-http/integrations/genai.go | head -20
```
Repository: maximhq/bifrost
Length of output: 757

🏁 Script executed:
```shell
# Get the full parseGeminiMultipartRelated function
cat -n transports/bifrost-http/integrations/genai.go | sed -n '867,934p'
```
Repository: maximhq/bifrost
Length of output: 2542

🏁 Script executed:
```shell
# Search for tests related to binary uploads or multipart/related
fd -e go --type f | xargs rg -l "parseGeminiMultipartRelated|binary.*upload|multipart.*test" 2>/dev/null
```
Repository: maximhq/bifrost
Length of output: 104

🏁 Script executed:
```shell
# Check if there are any existing issues or comments about binary data handling
rg -n "binary|null.*byte|UTF-8|encoding" transports/bifrost-http/integrations/genai.go
```
Repository: maximhq/bifrost
Length of output: 76
Critical: Binary data corruption in multipart/related parsing.
The file content is corrupted during parsing because the request body is converted to a string at line 887 (strings.Split(string(body), delimiter)), then extracted as a string, and finally converted back to bytes at line 929. Go strings are UTF-8 encoded; any binary data containing null bytes or invalid UTF-8 sequences will be lost or replaced with the replacement character (U+FFFD). This breaks all binary file uploads through the multipart/related endpoint.
Use bytes.Split() instead of strings.Split() to preserve binary data integrity throughout the parsing process.
🤖 Prompt for AI Agents
transports/bifrost-http/integrations/genai.go around line 887-929: the
multipart/related parsing currently converts the request body to a string
(strings.Split(string(body), delimiter)) and later reconstructs the file with
[]byte(content), corrupting binary data; change the parsing to operate on the
raw []byte using bytes.Split (or bytes.Index/bytes.SplitN) so you never convert
body to string, extract the part slice directly as []byte, and assign
uploadReq.File = partSlice (or a copy) instead of converting through string;
ensure any boundary/index operations use byte delimiters and adjust variable
types accordingly to preserve binary integrity.
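A minimal sketch of the byte-safe splitting described above; the helper name and the exact boundary/CRLF handling are assumptions, and parsing of the JSON metadata part is elided:

```go
package integrations

import "bytes"

// splitMultipartRelated splits the raw body on the boundary without ever
// round-tripping through string, so binary content (null bytes, invalid
// UTF-8 sequences) survives intact. Hypothetical helper, not the PR's code.
func splitMultipartRelated(body, boundary []byte) [][]byte {
	delimiter := append([]byte("--"), boundary...)
	parts := bytes.Split(body, delimiter)
	out := make([][]byte, 0, len(parts))
	for _, p := range parts {
		// Trim the CRLF framing around each part, still operating on bytes.
		p = bytes.TrimPrefix(p, []byte("\r\n"))
		p = bytes.TrimSuffix(p, []byte("\r\n"))
		if len(p) == 0 || bytes.Equal(p, []byte("--")) {
			continue // skip the empty preamble and the closing "--" marker
		}
		out = append(out, p)
	}
	return out
}
```

The file part can then be assigned directly (uploadReq.File = filePart) with no intermediate string conversion.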
| fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename) | ||
| fileResponse, bifrostErr := g.client.FileUploadRequest(requestCtx, fileReq.UploadRequest) | ||
| if bifrostErr != nil { | ||
| errMsg := "unknown error" | ||
| if bifrostErr.Error != nil { | ||
| errMsg = bifrostErr.Error.Message | ||
| } | ||
| fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider) | ||
| g.sendError(ctx, bifrostCtx, config.ErrorConverter, bifrostErr) | ||
| return | ||
| } | ||
| fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID) | ||
| if config.PostCallback != nil { | ||
| fmt.Printf("[DEBUG] router: calling PostCallback\n") | ||
| if err := config.PostCallback(ctx, req, fileResponse); err != nil { | ||
| fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err) | ||
| g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute post-request callback")) | ||
| return | ||
| } | ||
| fmt.Printf("[DEBUG] router: PostCallback success\n") | ||
| } | ||
| if config.FileUploadResponseConverter != nil { | ||
| fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n") | ||
| response, err = config.FileUploadResponseConverter(bifrostCtx, fileResponse) | ||
| fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err) |
Remove debug fmt.Printf statements before merging.
These debug statements use fmt.Printf instead of the structured logger and should be removed before production:
- Line 868: fmt.Printf("[DEBUG] router: calling FileUploadRequest...
- Line 875: fmt.Printf("[DEBUG] router: FileUploadRequest error...
- Line 879: fmt.Printf("[DEBUG] router: FileUploadRequest success...
- Line 881: fmt.Printf("[DEBUG] router: calling PostCallback\n"
- Line 883: fmt.Printf("[DEBUG] router: PostCallback error...
- Line 887: fmt.Printf("[DEBUG] router: PostCallback success\n"
- Line 890: fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n"
- Line 892: fmt.Printf("[DEBUG] router: FileUploadResponseConverter done...

If debug logging is needed for production, use g.logger.Debug(...) instead.
```diff
 		case schemas.FileUploadRequest:
 			if fileReq.UploadRequest == nil {
 				g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(nil, "Invalid file upload request"))
 				return
 			}
-			fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename)
 			fileResponse, bifrostErr := g.client.FileUploadRequest(requestCtx, fileReq.UploadRequest)
 			if bifrostErr != nil {
-				errMsg := "unknown error"
-				if bifrostErr.Error != nil {
-					errMsg = bifrostErr.Error.Message
-				}
-				fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider)
 				g.sendError(ctx, bifrostCtx, config.ErrorConverter, bifrostErr)
 				return
 			}
-			fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID)
 			if config.PostCallback != nil {
-				fmt.Printf("[DEBUG] router: calling PostCallback\n")
 				if err := config.PostCallback(ctx, req, fileResponse); err != nil {
-					fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err)
 					g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute post-request callback"))
 					return
 				}
-				fmt.Printf("[DEBUG] router: PostCallback success\n")
 			}
 			if config.FileUploadResponseConverter != nil {
-				fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n")
 				response, err = config.FileUploadResponseConverter(bifrostCtx, fileResponse)
-				fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err)
 			} else {
 				response = fileResponse
 			}
```

🤖 Prompt for AI Agents
transports/bifrost-http/integrations/router.go around lines 868 to 892: several
debug statements use fmt.Printf and must be removed or replaced with the
structured logger before merging; remove the listed fmt.Printf calls (lines 868,
875, 879, 881, 883, 887, 890, 892) and, if you still need debug visibility,
replace them with g.logger.Debug(...) calls that include the same contextual
fields (provider, purpose, filename, error details, response ID, etc.) and drop
any raw fmt.Printf usages so production code uses the structured logger.
| fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err) | ||
| g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response")) | ||
| return | ||
| } | ||
|
|
||
| fmt.Printf("[DEBUG] router: sending file upload success response\n") | ||
| g.sendSuccess(ctx, bifrostCtx, config.ErrorConverter, response) |
Remove remaining debug statements in file request handling.
Similar to the file upload path, these debug statements should be removed:

```diff
 	if err != nil {
-		fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err)
 		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response"))
 		return
 	}
-	fmt.Printf("[DEBUG] router: sending file upload success response\n")
 	g.sendSuccess(ctx, bifrostCtx, config.ErrorConverter, response)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
	g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response"))
	return
}
g.sendSuccess(ctx, bifrostCtx, config.ErrorConverter, response)
```
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/router.go around lines 1001 to 1007
there are leftover fmt.Printf debug statements; remove the two fmt.Printf lines
(the "router: file response conversion error" and "router: sending file upload
success response") so the handler no longer prints raw debug output to stdout;
if you need to preserve diagnostics replace them with the project's structured
logger at the appropriate level (e.g., logger.Debugf) or drop them entirely.
```go
ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
ctx.SetBody(responseBody)
fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
```
Remove debug log that exposes the full response body.
Logging the full response body may expose sensitive information such as PII, API keys, authentication tokens, or other confidential data. Debug logs should not include complete response payloads in production environments. Apply this diff to remove the sensitive debug log:

```diff
 ctx.SetBody(responseBody)
-fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
ctx.SetBody(responseBody)
}
```
🤖 Prompt for AI Agents
transports/bifrost-http/integrations/utils.go around line 196: remove the debug
fmt.Printf that prints the full response body since it can expose sensitive
data; replace it with a non-sensitive log (e.g., log status and content length
only) or remove the statement entirely, ensuring no raw responseBody or
string(responseBody) is logged and that any remaining logging follows safe
practices (no PII or tokens).
6a51a9f to 54239ed
5a7e7a2 to 3bf5d39
Actionable comments posted: 2
♻️ Duplicate comments (5)
transports/bifrost-http/integrations/router.go (1)
868-893: Remove or replace debug fmt.Printf logging in file request handlers.
The fmt.Printf debug statements in the file upload path and the trailing conversion/success logs are still present and bypass the structured logger:
- Around the upload path (e.g., provider/purpose/filename, error/success, callbacks, converters).
- Around the file response conversion error and final “success response” lines.

These match the previously raised concern and should be removed or replaced with g.logger.Debug(...) (or dropped entirely) before this ships.
Also applies to: 1001-1006

transports/bifrost-http/integrations/genai.go (4)
45-49: Handle error from crypto/rand.Read.
This issue was already flagged. Ignoring the error from crypto/rand.Read could lead to predictable session IDs if the random source is unavailable.

240-254: Remove debug fmt.Printf statements.
These debug statements were already flagged in a previous review. They should be removed or replaced with a proper logger with configurable log levels.

887-929: Binary data corruption in multipart/related parsing.
This issue was already flagged. Converting the binary body to a string (line 887) and back to bytes (line 929) corrupts non-UTF-8 binary data. Use bytes.Split() instead of strings.Split().

792-861: Remove all debug fmt.Printf statements.
This issue was already flagged in a previous review. The numerous debug statements throughout the resumable upload handling code should be removed or replaced with a proper structured logger.
Also applies to: 1072-1077, 1108-1148
🧹 Nitpick comments (8)

core/providers/gemini/types.go (1)
1594-1602: Document the flexible Src field type.
The Src field is defined as interface{}, which can accept either a string (file reference) or an array of inline requests. Consider adding a comment documenting the expected types for clarity. Apply this diff to improve the documentation:

```diff
 type GeminiBatchCreateRequestSDK struct {
 	Model string `json:"model,omitempty"`
-	// Src can be either:
-	// - A string like "files/display_name" for file-based input
-	// - An array of inline request objects
+	// Src can be either a string (e.g., "files/display_name") for file-based input
+	// or an array of GeminiBatchInlineRequest for inline requests
 	Src interface{} `json:"src,omitempty"`
 }
```

transports/bifrost-http/integrations/router.go (1)
421-427: Consider cancelling the Bifrost context when PreCallback short-circuits resumable uploads.
In the ErrResumableUploadInit branch you return early without invoking cancel(), unlike the other code paths that eventually defer cancel() or pass it into streaming. If ConvertToBifrostContext uses a context.WithCancel, this risks a small but avoidable context/goroutine leak on resumable-init short-circuits.

You could defensively call cancel() before returning:

```diff
 	if config.PreCallback != nil {
 		if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
 			// Check if this is a resumable upload init that was already handled
 			if err == ErrResumableUploadInit {
-				// Response was already written by the PreCallback, just return
+				// Response was already written by the PreCallback; clean up context and return
+				cancel()
 				return
 			}
```

core/providers/gemini/files.go (1)
465-549: Gemini file conversion helpers are well-structured and consistent.
The new wrapper type and ToGeminiFile* converters cleanly translate Bifrost file responses into the Gemini JSON shapes, with sane defaults (e.g., application/octet-stream MIME type, UTC timestamps, safe handling of the optional ExpiresAt). This design should work well with the GenAI-facing routes.

transports/bifrost-http/integrations/genai.go (5)
207-314: Extract shared logic between POST and PUT resumable upload routes.
The POST (lines 209-261) and PUT (lines 265-314) routes for resumable uploads have nearly identical FileRequestConverter and FileUploadResponseConverter implementations. Consider extracting these to shared functions to reduce duplication and ease maintenance.

Example extraction:

```go
// Shared converter for resumable upload phase 2
func resumableUploadFileRequestConverter(ctx *context.Context, req interface{}) (*FileRequest, error) {
	if geminiReq, ok := req.(*gemini.GeminiFileUploadRequest); ok {
		provider := schemas.Gemini
		if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
			provider = p.(schemas.ModelProvider)
		}
		bifrostReq := &schemas.BifrostFileUploadRequest{
			Provider: provider,
			File:     geminiReq.File,
			Filename: geminiReq.Filename,
			Purpose:  geminiReq.Purpose,
		}
		return &FileRequest{
			Type:          schemas.FileUploadRequest,
			UploadRequest: bifrostReq,
		}, nil
	}
	return nil, errors.New("invalid file upload request type")
}
```

663-778: Extract common provider extraction logic.
The provider extraction pattern (reading from the header, defaulting to Gemini, storing in context) is duplicated across extractGeminiBatchCreateParams, extractGeminiBatchListQueryParams, extractGeminiBatchIDFromPath, and extractGeminiBatchIDFromPathCancel. Consider extracting it to a helper function:

```diff
+// extractProviderFromHeader extracts and sets the provider in context
+func extractProviderFromHeader(ctx *fasthttp.RequestCtx, bifrostCtx *context.Context) schemas.ModelProvider {
+	provider := string(ctx.Request.Header.Peek("x-model-provider"))
+	if provider == "" {
+		provider = string(schemas.Gemini)
+	}
+	*bifrostCtx = context.WithValue(*bifrostCtx, bifrostContextKeyProvider, schemas.ModelProvider(provider))
+	return schemas.ModelProvider(provider)
+}

 func extractGeminiBatchCreateParams(ctx *fasthttp.RequestCtx, bifrostCtx *context.Context, req interface{}) error {
-	provider := string(ctx.Request.Header.Peek("x-model-provider"))
-	if provider == "" {
-		provider = string(schemas.Gemini)
-	}
-	*bifrostCtx = context.WithValue(*bifrostCtx, bifrostContextKeyProvider, schemas.ModelProvider(provider))
+	extractProviderFromHeader(ctx, bifrostCtx)
 	// ... rest of function
```

1145-1145: Hardcoded path prefix may drift from the route definition.
The resumable upload URL uses a hardcoded path /genai/upload/v1beta/files/resumable/, which should match the route defined at line 211 (pathPrefix + "/upload/v1beta/files/resumable/{session_id}"). If the route path changes, this URL construction will break. Consider deriving this path from a shared constant or the route configuration, as sketched below.
174-193: Consider extracting the common FileRequestConverter/BatchRequestConverter pattern.
Multiple route configurations have nearly identical converter implementations that:
- Type-assert the request
- Extract the provider from context (defaulting to Gemini)
- Build a Bifrost request with the provider

This pattern is repeated for file upload, file list, file retrieve, file delete, and all batch operations. Extracting shared helper functions would reduce code duplication significantly.
Also applies to: 217-236, 273-292, 324-341, 365-380, 404-419, 450-490

511-531: The BatchCreateRequestConverter field name is misleading for list/retrieve/cancel/delete operations; consider renaming it to BatchRequestConverter across all integrations.
The field is used polymorphically for all batch operation types (list, retrieve, cancel, delete) via the BatchRequest.Type field, but the name suggests it only handles create operations. While response converters use operation-specific names (BatchListResponseConverter, BatchRetrieveResponseConverter, etc.), the request converter remains generic. To improve clarity, consider renaming BatchCreateRequestConverter to BatchRequestConverter in RouteConfig and in all integration files where it's used (anthropic.go, openai.go, bedrock.go, genai.go).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (7 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
- transports/bifrost-http/integrations/utils.go
- core/schemas/bifrost.go
- core/providers/bedrock/batch.go
- core/providers/azure/batch.go
- core/providers/sgl/batch.go
- core/providers/vertex/batch.go
- core/schemas/batch.go
- core/providers/openrouter/batch.go
- core/providers/groq/batch.go
- core/providers/mistral/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/parasail/batch.go
- core/providers/openai/batch.go
- core/schemas/provider.go
- core/providers/perplexity/batch.go
- core/providers/anthropic/batch.go
- core/providers/gemini/batch.go
- core/providers/gemini/files.go
- core/providers/elevenlabs/batch.go
- transports/bifrost-http/integrations/genai.go
- core/providers/cohere/batch.go
- transports/bifrost-http/integrations/router.go
- core/bifrost.go
- core/providers/gemini/types.go
- core/providers/ollama/batch.go
- core/providers/cerebras/batch.go
🧬 Code graph analysis (11)
core/providers/parasail/batch.go (2)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)

core/providers/openai/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/schemas/provider.go (2)
- core/schemas/bifrost.go (1): BatchDeleteRequest (103-103)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)

core/providers/perplexity/batch.go (4)
- core/providers/perplexity/perplexity.go (1): PerplexityProvider (19-24)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/anthropic/batch.go (4)
- core/providers/anthropic/anthropic.go (1): AnthropicProvider (21-28)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/gemini/batch.go (5)
- core/schemas/batch.go (15): BatchStatus (5-5), BatchStatusValidating (8-8), BatchStatusInProgress (10-10), BatchStatusFinalizing (11-11), BatchStatusCompleted (12-12), BatchStatusFailed (9-9), BatchStatusCancelling (14-14), BatchStatusCancelled (15-15), BatchStatusExpired (13-13), BifrostBatchCreateResponse (85-109), BifrostBatchRetrieveResponse (152-187), BifrostBatchListResponse (129-140), BifrostBatchCancelResponse (199-208), BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/providers/gemini/types.go (11): GeminiJobStatePending (1582-1582), GeminiJobStateRunning (1583-1583), GeminiJobStateSucceeded (1584-1584), GeminiJobStateFailed (1585-1585), GeminiJobStateCancelling (1586-1586), GeminiJobStateCancelled (1587-1587), GeminiBatchJobResponseSDK (1618-1624), GeminiBatchMetadata (1499-1512), GeminiBatchStats (1433-1437), GeminiBatchDest (1533-1536), GeminiBatchListResponseSDK (1627-1630)
- core/schemas/bifrost.go (5): BifrostError (364-373), Gemini (48-48), BatchDeleteRequest (103-103), BifrostResponseExtraFields (295-304), RequestType (83-83)
- core/providers/utils/utils.go (4): CheckOperationAllowed (408-420), NewBifrostOperationError (449-460), SetExtraHeaders (178-208), MakeRequestWithContext (39-93)
- core/schemas/provider.go (1): Provider (313-362)

core/providers/gemini/files.go (1)
- core/schemas/files.go (1): BifrostFileUploadResponse (89-105)

transports/bifrost-http/integrations/genai.go (3)
- transports/bifrost-http/integrations/router.go (3): EmbeddingResponseConverter (129-129), RequestParser (217-217), RouteConfig (270-301)
- core/providers/gemini/types.go (10): GeminiFileUploadRequest (1656-1662), Type (779-779), GeminiFileListRequest (1665-1669), GeminiFileRetrieveRequest (1672-1674), GeminiFileDeleteRequest (1677-1679), GeminiBatchCreateRequestSDK (1596-1602), GeminiBatchListRequestSDK (1633-1636), GeminiBatchRetrieveRequestSDK (1639-1641), GeminiBatchCancelRequestSDK (1644-1646), GeminiBatchDeleteRequestSDK (1649-1651)
- core/providers/gemini/batch.go (4): ToGeminiBatchJobResponse (871-896), ToGeminiBatchListResponse (936-972), ToGeminiBatchRetrieveResponse (899-933), ToGeminiBatchCancelResponse (975-984)

core/providers/cohere/batch.go (4)
- core/providers/cohere/cohere.go (1): CohereProvider (67-73)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/ollama/batch.go (3)
- core/providers/ollama/ollama.go (1): OllamaProvider (18-23)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/cerebras/batch.go (1)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (15)
core/providers/gemini/types.go (1)
1578-1589: LGTM! Job state constants properly defined. The new job state constants follow a consistent naming pattern and cover all necessary lifecycle states for batch jobs.

core/providers/openai/batch.go (1)
597-600: LGTM! Consistent unsupported operation implementation. The BatchDelete method correctly returns an unsupported operation error, consistent with OpenAI's current API capabilities and the pattern used across other providers.

core/providers/anthropic/batch.go (1)
783-786: LGTM! Consistent with other batch methods. The BatchDelete implementation follows the established pattern for unsupported operations in the Anthropic provider.

core/providers/cohere/batch.go (1)
35-38: LGTM! Maintains consistency with other batch operations. The BatchDelete method correctly returns an unsupported operation error, maintaining consistency with all other batch methods in this provider.

core/providers/perplexity/batch.go (1)
35-38: LGTM! Correct unsupported operation handling. The implementation is consistent with other batch operations in the Perplexity provider.

core/schemas/provider.go (1)
182-182: LGTM! Complete and consistent integration of BatchDelete. The changes properly integrate BatchDelete into:
- The AllowedRequests struct for operation gating
- The IsOperationAllowed switch case for validation
- The Provider interface for implementation

All additions follow the established patterns for other batch operations.
Also applies to: 231-232, 350-351

core/providers/cerebras/batch.go (1)
35-38: LGTM! Consistent with provider patterns. The BatchDelete implementation correctly returns an unsupported operation error, matching the pattern for all other batch operations in the Cerebras provider.

core/providers/ollama/batch.go (1)
35-38: LGTM! Final provider implementation is consistent. The BatchDelete method correctly implements the unsupported operation pattern, completing the consistent interface implementation across all providers.

core/bifrost.go (1)
1133-1209: BatchDeleteRequest implementation is consistent with existing Batch* APIs. Validation, provider/config lookup, key selection, and retry/error-enrichment all mirror the established patterns for retrieve/cancel/results; this looks correct and cohesive with the existing batch surface.

core/providers/parasail/batch.go (1)
35-38: Parasail BatchDelete stub matches the existing unsupported batch pattern. The BatchDelete method correctly advertises unsupported status using the shared NewUnsupportedOperationError helper, consistent with the other Parasail batch operations.

core/providers/elevenlabs/batch.go (1)
35-38: Elevenlabs BatchDelete stub is wired consistently. The BatchDelete implementation cleanly follows the existing pattern for unsupported Elevenlabs batch operations and aligns with the Provider interface.

transports/bifrost-http/integrations/router.go (1)
81-89: BatchDelete routing and conversion hooks are correctly plumbed. The additional DeleteRequest field on BatchRequest, the BatchDeleteResponseConverter type and RouteConfig field, and the new BatchDeleteRequest branch in handleBatchRequest all follow the existing batch patterns (create/list/retrieve/cancel/results) and should integrate cleanly with the new core BatchDeleteRequest API.
Also applies to: 159-162, 271-292, 820-841

core/providers/gemini/batch.go (1)
986-1043: Gemini BatchDelete implementation looks correct and aligned with other batch ops. The new BatchDelete:
- Enforces operation-allowed checks and batch_id presence.
- Builds the correct DELETE URL for both raw IDs and batches/... resource names.
- Uses x-goog-api-key consistently with other Gemini batch calls.
- Treats 200/204 as success and returns a typed BifrostBatchDeleteResponse with enriched ExtraFields.

This is consistent with the existing Gemini batch behaviors.

transports/bifrost-http/integrations/genai.go (2)
438-661: New batch API routes look structurally sound. The CreateGenAIBatchRouteConfigs function properly implements:
- Batch create with both file-based and inline request support (lines 463-482)
- Batch list with pagination (lines 503-543)
- Batch retrieve, cancel, and delete operations
- Proper provider extraction and context propagation
- Response converters that check for raw responses before converting

The route structure follows the established patterns in this file.

1033-1042: Session access pattern is safe; concurrent reuse is not possible. Session IDs are generated from 16 cryptographically random bytes (line 48), making reuse extremely unlikely. Within a single request, parseGeminiResumableUploadPhase2 (line 1033) and extractGeminiResumableUploadParams (lines 1097-1105) are called sequentially as RequestParser and PreCallback in the same request handler, not concurrently. Sessions are deleted immediately after use (line 1105), preventing reuse across requests. The cleanup goroutine (lines 57-60) removes sessions only after 1 hour, which doesn't affect active request processing.
func ToGeminiJobState(status schemas.BatchStatus) string {
	switch status {
	case schemas.BatchStatusValidating:
		return GeminiJobStatePending
	case schemas.BatchStatusInProgress:
		return GeminiJobStateRunning
	case schemas.BatchStatusFinalizing:
		return GeminiJobStateRunning
	case schemas.BatchStatusCompleted:
		return GeminiJobStateSucceeded
	case schemas.BatchStatusFailed:
		return GeminiJobStateFailed
	case schemas.BatchStatusCancelling:
		return GeminiJobStateCancelling
	case schemas.BatchStatusCancelled:
		return GeminiJobStateCancelled
	case schemas.BatchStatusExpired:
		return GeminiJobStateFailed
	default:
		return GeminiJobStatePending
	}
}
Fix BatchStats computation to avoid negative pending counts
The SDK converters generally look good, but there’s a corner case in the stats math:
- In `BatchRetrieve`, `BifrostBatchRetrieveResponse.RequestCounts` is populated with `Completed` and `Failed`, while `Total` is left at its zero value. `ToGeminiBatchRetrieveResponse` and `ToGeminiBatchListResponse` currently use `resp.RequestCounts.Total` directly to derive `RequestCount` and `PendingRequestCount`.
When Total == 0 and Completed > 0, PendingRequestCount = Total - Completed becomes negative, which is invalid.
Consider defensively deriving totals and pending counts, for example:
- result.Metadata = &GeminiBatchMetadata{
- Name: resp.ID,
- State: ToGeminiJobState(resp.Status),
- CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
- BatchStats: &GeminiBatchStats{
- RequestCount: resp.RequestCounts.Total,
- PendingRequestCount: resp.RequestCounts.Total - resp.RequestCounts.Completed,
- SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
- },
- }
+ total := resp.RequestCounts.Total
+ if total == 0 {
+ total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+ }
+ success := resp.RequestCounts.Completed
+ if success < 0 {
+ success = 0
+ }
+ pending := total - (success + resp.RequestCounts.Failed)
+ if pending < 0 {
+ pending = 0
+ }
+
+ result.Metadata = &GeminiBatchMetadata{
+ Name: resp.ID,
+ State: ToGeminiJobState(resp.Status),
+ CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
+ BatchStats: &GeminiBatchStats{
+ RequestCount: total,
+ PendingRequestCount: pending,
+ SuccessfulRequestCount: success,
+ },
+ }and mirror the same pattern inside ToGeminiBatchListResponse when populating BatchStats, so the invariants RequestCount ≥ 0, PendingRequestCount ≥ 0, and RequestCount ≈ success + failures + pending always hold.
Also applies to: 899-919, 941-958
🤖 Prompt for AI Agents
core/providers/gemini/batch.go lines ~847-868 (and similarly at 899-919,
941-958): the current stats math can produce negative PendingRequestCount when
resp.RequestCounts.Total is zero but Completed/Failed are non-zero; instead
derive totals defensively: compute total := resp.RequestCounts.Total; if total
== 0 { total = resp.RequestCounts.Completed + resp.RequestCounts.Failed +
resp.RequestCounts.Pending }; compute pending := total -
resp.RequestCounts.Completed - resp.RequestCounts.Failed; if pending < 0 {
pending = 0 }; then set RequestCount = total and PendingRequestCount = pending
(ensuring RequestCount ≥ 0, PendingRequestCount ≥ 0 and RequestCount ≈ success +
failures + pending). Apply the same change to ToGeminiBatchListResponse and any
other places noted.
copy(uploadReq.File, body)
uploadReq.Filename = session.Filename
uploadReq.MimeType = session.MimeType
uploadReq.Purpose = "batch" // Default purpose for file uploads via GenAI API
Hardcoded purpose may be incorrect for non-batch uploads.
The purpose is hardcoded to "batch" for all file uploads via the resumable upload API. If this endpoint is used for other purposes (e.g., general file storage for chat context), this default may be incorrect. Consider allowing the purpose to be specified in the metadata during phase 1.
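A hedged sketch of such a fallback, assuming the phase-1 metadata were carried on the session; `session.Purpose` and the allowlist values are hypothetical:

// Sketch: prefer a client-supplied purpose from phase-1 metadata, validated
// against an allowlist, and only default to "batch" when nothing usable exists.
allowedPurposes := map[string]bool{"batch": true, "assistants": true, "fine-tune": true} // illustrative values
purpose := strings.TrimSpace(session.Purpose) // hypothetical session field
if !allowedPurposes[purpose] {
	purpose = "batch" // conservative default, matching current behavior
}
uploadReq.Purpose = purpose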
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go around line 1056, the file
upload purpose is hardcoded to "batch", which can be incorrect for non-batch
uploads; change the code to read the purpose from the phase-1 metadata (e.g.,
check uploadReq.Metadata or incoming request metadata) and set uploadReq.Purpose
to that value if present, otherwise fall back to "batch"; additionally
validate/normalize the provided purpose against allowed values (or sanitize it)
before assigning to uploadReq.Purpose so invalid values are not sent to the
GenAI API.
Actionable comments posted: 2
♻️ Duplicate comments (7)
transports/bifrost-http/integrations/router.go (2)
868-892: Debug statements flagged in previous review. These `fmt.Printf` debug statements were already flagged in a past review comment. They should be removed or replaced with structured logging before merging.
1001-1007: Debug statements flagged in previous review. These `fmt.Printf` debug statements were already flagged in a past review comment. They should be removed before merging.
transports/bifrost-http/integrations/genai.go (5)
45-49: Error from `crypto/rand.Read` not handled (flagged in previous review). This was already flagged in a past review. Ignoring the error could lead to predictable session IDs if the random source fails.
240-254: Debug statements flagged in previous review. These `fmt.Printf` debug statements were already flagged in a past review comment. They should be removed or replaced with structured logging.
792-861: Debug statements flagged in previous review. Multiple `fmt.Printf` debug statements in `parseGeminiResumableUpload` and related functions were flagged in a past review. They should be removed or replaced with structured logging.
927-930: Binary data corruption flagged in previous review. The conversion through `string(body)` at line 887 and back to `[]byte(content)` at line 929 corrupts binary file uploads. This was already flagged in a past review.
1066-1079: Remove debug statements from `setResumableUploadFinalStatus`. These debug statements should be removed before merging.
🧹 Nitpick comments (2)
transports/bifrost-http/integrations/genai.go (2)
217-237: Consider extracting duplicate FileRequestConverter logic. The `FileRequestConverter` closures at lines 217-237 and 273-293 are nearly identical. Consider extracting to a shared helper function to reduce duplication:

// Helper to create FileRequest from GeminiFileUploadRequest
func createFileUploadRequest(ctx *context.Context, geminiReq *gemini.GeminiFileUploadRequest) (*FileRequest, error) {
	provider := schemas.Gemini
	if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
		provider = p.(schemas.ModelProvider)
	}
	return &FileRequest{
		Type: schemas.FileUploadRequest,
		UploadRequest: &schemas.BifrostFileUploadRequest{
			Provider: provider,
			File:     geminiReq.File,
			Filename: geminiReq.Filename,
			Purpose:  geminiReq.Purpose,
		},
	}, nil
}
1011-1064: Remove debug statements and clean up session handling. The debug statements here should be removed. Additionally, the comment on line 1059 notes that the session shouldn't be deleted here, but `extractGeminiResumableUploadParams` (line 1105) deletes it later. Ensure this ordering is always correct to avoid race conditions. Remove the debug statements and consider adding a brief code comment clarifying the session lifecycle (created in phase 1, used in phase 2 parser, deleted in phase 2 PreCallback).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (7 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
- transports/bifrost-http/integrations/utils.go
- core/providers/cohere/batch.go
- core/providers/sgl/batch.go
- core/schemas/bifrost.go
- core/providers/cerebras/batch.go
- core/providers/anthropic/batch.go
- core/providers/mistral/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/schemas/provider.go, core/providers/ollama/batch.go, core/providers/azure/batch.go, core/providers/parasail/batch.go, core/schemas/batch.go, core/providers/openai/batch.go, core/providers/vertex/batch.go, core/providers/groq/batch.go, core/providers/openrouter/batch.go, core/bifrost.go, core/providers/bedrock/batch.go, core/providers/perplexity/batch.go, core/providers/gemini/batch.go, transports/bifrost-http/integrations/router.go, transports/bifrost-http/integrations/genai.go, core/providers/elevenlabs/batch.go, core/providers/gemini/files.go, core/providers/gemini/types.go
🧬 Code graph analysis (16)
core/schemas/provider.go (2)
- core/schemas/bifrost.go (1): BatchDeleteRequest (103-103)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/providers/ollama/batch.go (4)
- core/providers/ollama/ollama.go (1): OllamaProvider (18-23)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/azure/batch.go (4)
- core/providers/azure/azure.go (1): AzureProvider (23-28)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/parasail/batch.go (4)
- core/providers/parasail/parasail.go (1): ParasailProvider (17-22)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/schemas/batch.go (2)
- core/schemas/provider.go (1): Provider (313-362)
- core/schemas/bifrost.go (2): ModelProvider (32-32), BifrostResponseExtraFields (295-304)
core/providers/openai/batch.go (4)
- core/providers/openai/openai.go (1): OpenAIProvider (24-30)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/vertex/batch.go (4)
- core/providers/vertex/vertex.go (1): VertexProvider (57-62)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/groq/batch.go (2)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/bifrost.go (3)
- core/schemas/bifrost.go (5): BatchDeleteRequest (103-103), BifrostError (364-373), ErrorField (382-389), BifrostErrorExtraFields (431-435), RequestType (83-83)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/provider.go (2): Provider (313-362), CustomProviderConfig (248-254)
core/providers/bedrock/batch.go (4)
- core/providers/bedrock/bedrock.go (1): BedrockProvider (29-35)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/perplexity/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/gemini/batch.go (3)
- core/schemas/batch.go (15): BatchStatus (5-5), BatchStatusValidating (8-8), BatchStatusInProgress (10-10), BatchStatusFinalizing (11-11), BatchStatusCompleted (12-12), BatchStatusFailed (9-9), BatchStatusCancelling (14-14), BatchStatusCancelled (15-15), BatchStatusExpired (13-13), BifrostBatchCreateResponse (85-109), BifrostBatchRetrieveResponse (152-187), BifrostBatchListResponse (129-140), BifrostBatchCancelResponse (199-208), BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/providers/gemini/types.go (10): GeminiJobStatePending (1582-1582), GeminiJobStateRunning (1583-1583), GeminiJobStateSucceeded (1584-1584), GeminiJobStateFailed (1585-1585), GeminiJobStateCancelling (1586-1586), GeminiJobStateCancelled (1587-1587), GeminiBatchJobResponseSDK (1618-1624), GeminiBatchMetadata (1499-1512), GeminiBatchStats (1433-1437), GeminiBatchDest (1533-1536)
- core/providers/utils/utils.go (3): CheckOperationAllowed (408-420), SetExtraHeaders (178-208), MakeRequestWithContext (39-93)
transports/bifrost-http/integrations/router.go (4)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- transports/bifrost-http/integrations/genai.go (1): ErrResumableUploadInit (36-36)
- core/schemas/bifrost.go (2): BatchDeleteRequest (103-103), FileUploadRequest (104-104)
- core/schemas/provider.go (1): Provider (313-362)
transports/bifrost-http/integrations/genai.go (4)
- core/providers/gemini/types.go (2): GeminiFileUploadRequest (1656-1662), Type (779-779)
- core/schemas/files.go (2): BifrostFileUploadRequest (53-66), BifrostFileUploadResponse (89-105)
- core/providers/gemini/files.go (1): ToGeminiFileUploadResponse (472-485)
- core/providers/gemini/batch.go (3): ToGeminiBatchJobResponse (871-896), ToGeminiBatchListResponse (936-972), ToGeminiBatchRetrieveResponse (899-933)
core/providers/elevenlabs/batch.go (4)
- core/providers/elevenlabs/elevenlabs.go (1): ElevenlabsProvider (23-29)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/gemini/files.go (1)
- core/schemas/files.go (1): BifrostFileUploadResponse (89-105)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (30)
core/providers/gemini/types.go (2)
1578-1589: LGTM! Clear distinction between SDK and internal batch states. The new `GeminiJobState*` constants are well-documented as "Google GenAI SDK job states" and properly differentiated from the existing `GeminiBatchState*` constants. The naming convention and value format are consistent.
1656-1662: LGTM! File upload enhancements align with resumable upload support. The addition of `MimeType` and `ResumableSessionID` fields appropriately extends the file upload functionality. The `MimeType` field enables explicit MIME type handling, while `ResumableSessionID` (marked as internal with `json:"-"`) supports resumable upload sessions.
core/providers/vertex/batch.go (1)
35-38: LGTM! Consistent with other unsupported batch operations. The `BatchDelete` implementation follows the same pattern as other batch methods in this file, correctly returning an unsupported operation error.
core/providers/azure/batch.go (1)
438-441: LGTM! Follows the established pattern. The `BatchDelete` stub correctly signals that Azure does not support this operation.
core/providers/openrouter/batch.go (1)
35-38: LGTM! Consistent stub implementation.
core/providers/bedrock/batch.go (1)
932-935: LGTM! Consistent implementation.
core/schemas/provider.go (2)
182-182: LGTM! Proper extension of AllowedRequests. The `BatchDelete` field is correctly placed among other batch operations and follows the established naming convention.
231-232: LGTM! Complete and consistent Provider interface extension. The `BatchDelete` capability is properly integrated across all three required locations:
- `AllowedRequests.BatchDelete` field (line 182)
- `IsOperationAllowed` case handler (lines 231-232)
- `Provider` interface method (lines 350-351)
The implementation follows the established patterns for other batch operations.
Also applies to: 350-351
core/providers/groq/batch.go (1)
35-38: LGTM! Consistent stub implementation.
core/providers/openai/batch.go (1)
597-600: LGTM! Completes the consistent BatchDelete stub pattern across all providers. All reviewed providers (Vertex, Azure, OpenRouter, Bedrock, Groq, and OpenAI) implement `BatchDelete` as an unsupported operation using the same pattern. This consistency is excellent for maintainability.
core/providers/perplexity/batch.go (1)
35-38: LGTM! The `BatchDelete` stub implementation follows the established pattern used by other batch methods in this file, correctly returning an unsupported operation error with the appropriate request type constant.
core/providers/ollama/batch.go (1)
35-38: LGTM! The `BatchDelete` stub implementation is consistent with the other batch method stubs in this file and correctly implements the Provider interface requirement.
core/schemas/batch.go (1)
265-281: LGTM! The `BifrostBatchDeleteRequest` and `BifrostBatchDeleteResponse` types are well-structured and follow the established patterns from other batch request/response types in this file. The request mirrors `BifrostBatchRetrieveRequest` and `BifrostBatchCancelRequest`, while the response appropriately includes a `Deleted` confirmation field.
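For orientation, a sketch of the shape these types imply; only the `Deleted` confirmation field is stated explicitly in this review, while the remaining field names mirror the retrieve/cancel siblings and are assumptions:

// Hypothetical sketch of the delete request/response pair.
type BifrostBatchDeleteRequest struct {
	Provider ModelProvider `json:"provider"`
	BatchID  string        `json:"batch_id"`
}

type BifrostBatchDeleteResponse struct {
	ID          string                     `json:"id"`
	Deleted     bool                       `json:"deleted"` // confirmation flag noted in the review
	ExtraFields BifrostResponseExtraFields `json:"extra_fields"`
}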
1133-1209: LGTM! The `BatchDeleteRequest` implementation follows the established pattern from other batch methods (`BatchRetrieveRequest`, `BatchCancelRequest`, `BatchResultsRequest`):
- Consistent nil/empty validation for request, provider, and batch_id
- Proper context defaulting
- Same base provider type resolution logic for custom providers
- Correct key selection pattern
- Appropriate use of `executeRequestWithRetries`
- Error extra fields populated consistently
A condensed sketch of this flow follows.
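The sketch below condenses the validate-then-dispatch flow listed above; the helper names (`newBifrostError`, `resolveProviderAndKey`) and the `executeRequestWithRetries` signature are assumptions for illustration, not the repo's literal API:

// Condensed, hypothetical sketch of BatchDeleteRequest's shape.
func (b *Bifrost) BatchDeleteRequest(ctx context.Context, req *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
	if req == nil || req.Provider == "" || req.BatchID == "" {
		return nil, newBifrostError("provider and batch_id are required") // assumed error helper
	}
	if ctx == nil {
		ctx = context.Background() // context defaulting
	}
	provider, key, bifrostErr := b.resolveProviderAndKey(ctx, req.Provider) // assumed lookup + key selection
	if bifrostErr != nil {
		return nil, bifrostErr
	}
	// Retry wrapper tags errors with the request type so extra fields stay consistent.
	return executeRequestWithRetries(ctx, schemas.BatchDeleteRequest, func() (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
		return provider.BatchDelete(ctx, key, req)
	})
}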
core/providers/parasail/batch.go (1)
35-38: LGTM! The `BatchDelete` stub implementation is consistent with the other batch method stubs and correctly follows the established provider pattern.
core/providers/elevenlabs/batch.go (1)
35-38: LGTM! The `BatchDelete` stub correctly follows the same pattern as other unsupported batch operations in this provider, returning the appropriate error with consistent parameter usage.
core/providers/gemini/files.go (3)
466-485: LGTM! The new wrapper type and updated converter correctly align with Google's API response format. The structured approach with `GeminiFileUploadResponseWrapper` is cleaner than returning a raw map.
487-519: LGTM! The new `ToGeminiFileListResponse` and `ToGeminiFileRetrieveResponse` converters follow consistent patterns and correctly map Bifrost fields to Gemini's SDK format.
521-549: LGTM! The helper functions `toGeminiFileState`, `formatGeminiTimestamp`, and `safeDerefInt64` are clean, handle edge cases appropriately (zero timestamp returns empty string, nil pointer returns 0), and improve code readability.
transports/bifrost-http/integrations/router.go (4)
88-88: LGTM! The `DeleteRequest` field addition to `BatchRequest` follows the established pattern for other batch operation fields.
159-162: LGTM! The `BatchDeleteResponseConverter` type definition is consistent with other batch response converter types in this file.
422-426: LGTM! Proper sentinel error handling for `ErrResumableUploadInit` to short-circuit processing when a resumable upload initialization has already been handled by the PreCallback.
820-841: LGTM! The `BatchDeleteRequest` case follows the same pattern as other batch operations (BatchCreate, BatchList, etc.) with proper nil checks, error handling, post-callback execution, and response conversion.
core/providers/gemini/batch.go (3)
846-868: LGTM! The `ToGeminiJobState` function provides a comprehensive mapping from Bifrost batch statuses to Gemini SDK job states. Mapping `BatchStatusExpired` to `GeminiJobStateFailed` is a reasonable choice since Gemini doesn't have an explicit expired state.
986-1043: LGTM! The `BatchDelete` implementation follows the established pattern from other batch operations:
- Proper operation allowed check
- Input validation for empty batch ID
- Correct URL construction handling both prefixed and unprefixed batch IDs
- Appropriate HTTP DELETE method
- Handles both 200 OK and 204 No Content success responses
887-892: The `SuccessfulRequestCount` calculation is correct. The `Completed` field in `BatchRequestCounts` represents all finished requests (both successful and failed), not only successful completions. The formula `Completed - Failed` accurately computes the successful count. This is consistent across all providers: Anthropic maps its Succeeded + Errored to Completed, and OpenAI/Azure use Completed to include all finished requests regardless of outcome.
Likely an incorrect or invalid review comment.
transports/bifrost-http/integrations/genai.go (4)
23-67: Session management implementation looks good with one consideration. The `uploadSession` type and cleanup goroutine provide appropriate session lifecycle management. The 1-hour expiration with 10-minute cleanup interval is reasonable. Note: the cleanup goroutine started in `init()` runs for the lifetime of the process, which is acceptable for a server application. Consider documenting this behavior.
438-501: LGTM - Batch create route configuration. The batch create endpoint configuration properly handles both file-based (`src` as string) and inline (`src` as array) request formats, with correct provider extraction from context.
503-543: LGTM - Batch list route configuration. The batch list endpoint properly extracts pagination parameters and converts to Bifrost format.
621-661: LGTM - Batch delete route configuration. The batch delete endpoint follows the established pattern for batch operations with proper provider extraction and response handling (returns empty object on success).
// GeminiBatchCreateRequestSDK represents the SDK format for batch create requests.
// The SDK sends: batches.create(model="...", src=[...] or src="files/...")
type GeminiBatchCreateRequestSDK struct {
	Model string `json:"model,omitempty"`
	// Src can be either:
	// - A string like "files/display_name" for file-based input
	// - An array of inline request objects
	Src interface{} `json:"src,omitempty"`
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for functions that handle GeminiBatchCreateRequestSDK
echo "=== Functions handling GeminiBatchCreateRequestSDK ==="
rg -n "GeminiBatchCreateRequestSDK" --type=go core/providers/gemini/
echo -e "\n=== Type assertions and switches on Src ==="
rg -n -C3 "\.Src|type assert|switch.*type" --type=go core/providers/gemini/ | head -100
echo -e "\n=== Looking for conversion/validation functions ==="
ast-grep --pattern $'func $_(req $_ GeminiBatchCreateRequestSDK $_) $$$'
Repository: maximhq/bifrost
Length of output: 924
🏁 Script executed:
#!/bin/bash
# Search for all batch-related functions and converters
echo "=== Batch-related conversion functions ==="
rg -n "batch|Batch" --type=go core/providers/gemini/ | grep -i "func\|convert" | head -30
echo -e "\n=== Custom UnmarshalJSON implementations ==="
rg -n "UnmarshalJSON" --type=go core/providers/gemini/
echo -e "\n=== Files in gemini provider directory ==="
ls -la core/providers/gemini/
echo -e "\n=== Search for any handling of Src field in entire gemini provider ==="
rg -n "Src" --type=go core/providers/gemini/ -B2 -A2Repository: maximhq/bifrost
Length of output: 6507
🏁 Script executed:
#!/bin/bash
# Read the BatchCreate function to see how it handles GeminiBatchCreateRequestSDK
echo "=== BatchCreate function implementation ==="
sed -n '124,250p' core/providers/gemini/batch.go
echo -e "\n=== buildBatchRequestItems function ==="
sed -n '57,122p' core/providers/gemini/batch.go
Repository: maximhq/bifrost
Length of output: 6307
🏁 Script executed:
#!/bin/bash
# Search for all usages of GeminiBatchCreateRequestSDK
echo "=== All references to GeminiBatchCreateRequestSDK ==="
rg -n "GeminiBatchCreateRequestSDK" --type=go
echo -e "\n=== Check if any SDK types are actually used ==="
rg -n "GeminiBatchCreateRequestSDK|GeminiBatchInlineRequest|GeminiBatchJobResponseSDK" --type=go core/providers/gemini/
echo -e "\n=== Check test file ==="
cat core/providers/gemini/batch_test.go
Repository: maximhq/bifrost
Length of output: 3264
🏁 Script executed:
#!/bin/bash
# Read the genai.go file around the usage points
echo "=== Line 448 area ==="
sed -n '430,470p' transports/bifrost-http/integrations/genai.go
echo -e "\n=== Line 451 area (more context) ==="
sed -n '445,480p' transports/bifrost-http/integrations/genai.go
echo -e "\n=== Line 678 area ==="
sed -n '670,710p' transports/bifrost-http/integrations/genai.go
Repository: maximhq/bifrost
Length of output: 4475
🏁 Script executed:
#!/bin/bash
# Read the complete BatchCreateRequestConverter function
echo "=== Complete BatchCreateRequestConverter function ==="
sed -n '451,550p' transports/bifrost-http/integrations/genai.go
Repository: maximhq/bifrost
Length of output: 3604
🏁 Script executed:
#!/bin/bash
# Check if there's validation in BatchCreate that requires InputFileID or Requests
echo "=== Validation in core provider BatchCreate ==="
sed -n '140,155p' core/providers/gemini/batch.go
Repository: maximhq/bifrost
Length of output: 660
Src field handling lacks explicit error messaging for invalid types.
The Src field in GeminiBatchCreateRequestSDK is processed via type switch in transports/bifrost-http/integrations/genai.go (lines 451–481), supporting both string (file reference) and []interface{} (inline requests) cases. However, three issues exist:
- Invalid `Src` types that are neither string nor `[]interface{}` are silently ignored, resulting in empty `Requests` and `InputFileID` being passed to the core provider.
- Non-map items within the `[]interface{}` array are silently skipped without indication to the user.
- The downstream validation in `BatchCreate` catches this with the error "either input_file_id or requests must be provided", but this doesn't clarify that the issue originated from an invalid `Src` format.
Add explicit validation in the HTTP transport layer to return a clear error for unsupported Src types (e.g., "src must be a string (file reference) or array of request objects").
🤖 Prompt for AI Agents
In core/providers/gemini/types.go around lines 1594 to 1602 and in
transports/bifrost-http/integrations/genai.go around lines 451 to 481, the Src
field handling silently ignores unsupported types and non-map array items;
update the transport layer to validate Src explicitly: if Src is a string treat
as file ref, if it's an array iterate and require each item be a map/object and
return a clear HTTP 400 error when any item is not a map, and if Src is any
other type return a clear HTTP 400 error like "src must be a string (file
reference) or array of request objects" so the client receives a precise
validation message instead of downstream generic errors.
// Build the upload URL for phase 2
// Use the request's host and scheme to build the URL
scheme := "http"
if ctx.IsTLS() {
	scheme = "https"
}
host := string(ctx.Host())
uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)

fmt.Printf("[DEBUG] extractGeminiFileUploadParams: returning upload URL=%s\n", uploadURL)

// Send the upload URL response
ctx.Response.Header.Set("X-Goog-Upload-URL", uploadURL)
ctx.Response.Header.Set("X-Goog-Upload-Status", "active")
ctx.Response.Header.SetContentType("application/json")
ctx.SetStatusCode(200)

// Return empty JSON object as response body
ctx.SetBody([]byte("{}"))

// Mark that response was written
*bifrostCtx = context.WithValue(*bifrostCtx, contextKeyResponseWritten{}, true)

// Return sentinel error to signal router to skip further processing
return ErrResumableUploadInit
}
🧩 Analysis chain
🏁 Script executed:
# Check if there's existing handling for forwarded headers in the codebase
rg -n -i "x-forwarded|forwarded-host|forwarded-proto" --type goRepository: maximhq/bifrost
Length of output: 294
🏁 Script executed:
# Look at the X-Forwarded-Proto handling in session.go
sed -n '135,180p' transports/bifrost-http/handlers/session.go
Repository: maximhq/bifrost
Length of output: 1471
🏁 Script executed:
# Search for X-Forwarded-Host in the codebase
rg -n "X-Forwarded-Host" --type go
# Also check if ctx.Host() is used in genai.go or if there are URL construction patterns
rg -n "ctx\.Host\(\)" transports/bifrost-http/integrations/genai.go
# Check the broader pattern in genai.go
rg -n "fmt\.Sprintf.*://" transports/bifrost-http/integrations/genai.go | head -20Repository: maximhq/bifrost
Length of output: 220
🏁 Script executed:
# Check for README or documentation
find . -type f -name "README*" -o -name "*.md" | head -20
# Search for any comments or issues about reverse proxy in the codebase
rg -n -i "reverse.?proxy|proxy|forwarded" --type go --type md | head -30Repository: maximhq/bifrost
Length of output: 4570
🏁 Script executed:
# Look at more context around the problematic code in genai.go
sed -n '1120,1170p' transports/bifrost-http/integrations/genai.go
# Check if there are other places in genai.go that handle ctx.IsTLS()
rg -n "ctx\.IsTLS\(\)" transports/bifrost-http/integrations/genai.goRepository: maximhq/bifrost
Length of output: 2080
🏁 Script executed:
# Search for other URL constructions with scheme/host in handlers and integrations
rg -n "fmt\.Sprintf.*https?://" transports/bifrost-http/ --type go | head -20
# Also check if there are any other genai file upload related functions
rg -n "X-Goog-Upload" transports/bifrost-http/ --type goRepository: maximhq/bifrost
Length of output: 971
🏁 Script executed:
# Search for utility functions that might handle forwarded headers
rg -n "func.*Forward|func.*Scheme|func.*Host" transports/bifrost-http/ --type go | grep -i "forward\|scheme"
# Check if there's a common patterns file or utils
find transports/bifrost-http -name "*util*" -o -name "*helper*" -o -name "*common*" | head -10
Repository: maximhq/bifrost
Length of output: 146
🏁 Script executed:
# Check handlers/utils.go
cat -n transports/bifrost-http/handlers/utils.go | head -80
# Check integrations/utils.go
cat -n transports/bifrost-http/integrations/utils.go | head -80
Repository: maximhq/bifrost
Length of output: 6347
Support reverse proxy headers for resumable upload URL construction.
The upload URL at line 1145 uses ctx.Host() and ctx.IsTLS() directly without considering X-Forwarded-Proto and X-Forwarded-Host headers. When Bifrost runs behind a reverse proxy, this URL will reference internal host/scheme instead of the client-facing address, causing the client's subsequent upload requests to fail. Follow the pattern from session.go (lines 139, 175) which checks X-Forwarded-Proto for HTTPS detection, and similarly add support for X-Forwarded-Host.
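A minimal sketch of that header-aware fallback on a fasthttp `RequestCtx`, reusing the variables from the quoted snippet; this is one way to follow the described pattern, not the repo's actual fix:

// Prefer proxy-supplied headers; fall back to the direct connection's values.
scheme := "http"
if proto := string(ctx.Request.Header.Peek("X-Forwarded-Proto")); strings.Contains(proto, "https") {
	scheme = "https"
} else if ctx.IsTLS() {
	scheme = "https"
}
host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
if host == "" {
	host = string(ctx.Host())
}
uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)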
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go around lines 1138 to 1163, the
upload URL is built using ctx.Host() and ctx.IsTLS() which breaks when behind a
reverse proxy; change the logic to first inspect request headers for
X-Forwarded-Proto (use "https" if it contains "https", otherwise "http") and
X-Forwarded-Host (use its value if present and non-empty), falling back to
ctx.IsTLS() and ctx.Host() only if those headers are missing; construct
uploadURL from the chosen scheme and host and keep the rest of the response flow
identical.
Force-pushed from 54239ed to aa57668
Force-pushed from 3bf5d39 to 9e1cf19
Actionable comments posted: 1
♻️ Duplicate comments (4)
core/providers/gemini/batch.go (1)
887-891: Address the BatchStats computation issue to prevent negative counts. The SDK converters still contain the issue previously flagged: when `resp.RequestCounts.Total` is 0 but `Completed` or `Failed` are non-zero, the math produces negative `PendingRequestCount` values. This affects:
- `ToGeminiBatchJobResponse` (lines 887-891)
- `ToGeminiBatchRetrieveResponse` (lines 914-918)
- `ToGeminiBatchListResponse` (lines 953-957)
Apply defensive computation as suggested in the previous review to ensure valid, non-negative counts.
Also applies to: 914-918, 953-957
transports/bifrost-http/integrations/router.go (2)
868-892: Remove debugfmt.Printfstatements before merging.These debug statements should be removed or replaced with structured logging before production. This was flagged in a previous review.
1001-1001: Remove debugfmt.Printfstatement before merging.This debug statement should be removed or replaced with structured logging before production. This was flagged in a previous review.
transports/bifrost-http/integrations/genai.go (1)
463-482: Validate Src field type explicitly to provide clear error messages.The
Srcfield is processed via type switch but silently ignores unsupported types, resulting in emptyRequestsandInputFileID. This leads to a generic downstream error instead of a clear validation message.Based on previous review feedback, add explicit validation:
// Handle src field - can be string (file reference) or array (inline requests) switch src := sdkReq.Src.(type) { case string: // File-based input: src="files/display_name" bifrostReq.InputFileID = strings.TrimPrefix(src, "files/") case []interface{}: // Inline requests: src=[{contents: [...], config: {...}}] requests := make([]schemas.BatchRequestItem, 0, len(src)) for i, item := range src { if itemMap, ok := item.(map[string]interface{}); ok { customID := fmt.Sprintf("request-%d", i) requests = append(requests, schemas.BatchRequestItem{ CustomID: customID, Body: itemMap, }) + } else { + return nil, fmt.Errorf("src array item %d must be an object, got %T", i, item) } } bifrostReq.Requests = requests +default: + if sdkReq.Src != nil { + return nil, errors.New("src must be a string (file reference) or array of request objects") + } }
🧹 Nitpick comments (2)
core/providers/gemini/files.go (1)
472-485: Consider preserving actual MIME type instead of hardcoding.
ToGeminiFileUploadResponsehardcodesMimeTypeas"application/octet-stream". TheBifrostFileUploadResponseschema doesn't appear to carry the original MIME type, but if the upload request or response contains this information, preserving it would provide more accurate metadata to clients.If the MIME type is available elsewhere (e.g., from the original request or stored in extra fields), consider passing it through:
- MimeType: "application/octet-stream", + MimeType: getMimeTypeOrDefault(resp), // or pass as parametertransports/bifrost-http/integrations/genai.go (1)
174-180: Consider extracting repeated provider resolution into a helper function.The pattern of extracting provider from context with a Gemini default is repeated across multiple route converters. This could be simplified with a helper function.
// Helper function to add at the top of the file func getProviderFromContext(ctx *context.Context) schemas.ModelProvider { if p := (*ctx).Value(bifrostContextKeyProvider); p != nil { return p.(schemas.ModelProvider) } return schemas.Gemini }Then in each converter:
-// Get provider from context -provider := schemas.Gemini -if p := (*ctx).Value(bifrostContextKeyProvider); p != nil { - provider = p.(schemas.ModelProvider) -} +provider := getProviderFromContext(ctx)Also applies to: 220-223, 276-279, 327-330, 368-371, 407-410
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
- transports/bifrost-http/integrations/utils.go
- core/providers/azure/batch.go
- core/providers/cohere/batch.go
- core/providers/sgl/batch.go
- core/schemas/bifrost.go
- core/providers/vertex/batch.go
- core/providers/mistral/batch.go
- core/schemas/batch.go
- core/providers/perplexity/batch.go
- core/providers/cerebras/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/providers/bedrock/batch.go, core/providers/anthropic/batch.go, core/bifrost.go, core/schemas/provider.go, core/providers/ollama/batch.go, core/providers/groq/batch.go, core/providers/openai/batch.go, core/providers/gemini/batch.go, core/providers/elevenlabs/batch.go, core/providers/parasail/batch.go, core/providers/gemini/files.go, transports/bifrost-http/integrations/genai.go, transports/bifrost-http/integrations/router.go, core/providers/openrouter/batch.go, core/providers/gemini/types.go
🧬 Code graph analysis (10)
core/providers/bedrock/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/anthropic/batch.go (4)
- core/providers/anthropic/anthropic.go (1): AnthropicProvider (21-28)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/bifrost.go (4)
- core/schemas/bifrost.go (4): BatchDeleteRequest (103-103), BifrostError (364-373), BifrostErrorExtraFields (431-435), RequestType (83-83)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/provider.go (2): Provider (313-362), CustomProviderConfig (248-254)
- core/schemas/account.go (1): Key (8-17)
core/schemas/provider.go (2)
- core/schemas/bifrost.go (2): BatchDeleteRequest (103-103), BifrostError (364-373)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/providers/ollama/batch.go (4)
- core/providers/ollama/ollama.go (1): OllamaProvider (18-23)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/openai/batch.go (4)
- core/providers/openai/openai.go (1): OpenAIProvider (24-30)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/gemini/batch.go (3)
- core/schemas/batch.go (11): BatchStatus (5-5), BatchStatusValidating (8-8), BatchStatusInProgress (10-10), BatchStatusFinalizing (11-11), BatchStatusCompleted (12-12), BatchStatusFailed (9-9), BatchStatusCancelling (14-14), BatchStatusCancelled (15-15), BatchStatusExpired (13-13), BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/providers/gemini/types.go (11): GeminiJobStatePending (1582-1582), GeminiJobStateRunning (1583-1583), GeminiJobStateSucceeded (1584-1584), GeminiJobStateFailed (1585-1585), GeminiJobStateCancelling (1586-1586), GeminiJobStateCancelled (1587-1587), GeminiBatchJobResponseSDK (1618-1624), GeminiBatchMetadata (1499-1512), GeminiBatchStats (1433-1437), GeminiBatchDest (1533-1536), GeminiBatchListResponseSDK (1627-1630)
- core/providers/utils/utils.go (2): CheckOperationAllowed (408-420), NewBifrostOperationError (449-460)
core/providers/elevenlabs/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/parasail/batch.go (4)
- core/providers/parasail/parasail.go (1): ParasailProvider (17-22)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/gemini/files.go (2)
- core/schemas/files.go (1): BifrostFileUploadResponse (89-105)
- ui/lib/constants/logs.ts (1): Status (133-133)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (25)
core/providers/openrouter/batch.go (1)
35-38: LGTM! Consistent with existing batch operation stubs. The BatchDelete implementation correctly follows the established pattern for unsupported batch operations in the OpenRouter provider.
core/providers/bedrock/batch.go (1)
932-935: LGTM! Consistent stub for unsupported operation. The BatchDelete stub correctly indicates that batch deletion is not supported by the Bedrock provider, following the same pattern as other providers.
core/providers/groq/batch.go (1)
35-38: LGTM! Consistent with other unsupported batch operations. The BatchDelete stub appropriately returns an unsupported operation error, matching the pattern used for all batch operations in the Groq provider.
core/providers/openai/batch.go (1)
597-600: LGTM! Appropriate stub for unsupported operation. The BatchDelete stub correctly indicates that batch deletion is not supported by the OpenAI provider, following the established error handling pattern.
core/schemas/provider.go (1)
182-182: LGTM! Consistent integration of BatchDelete capability. The changes properly integrate BatchDelete into the provider framework:
- Added to AllowedRequests for operation gating
- Integrated into IsOperationAllowed switch statement
- Added to Provider interface with correct signature
All changes follow the established patterns for other batch operations.
Also applies to: 231-232, 350-351
core/providers/anthropic/batch.go (1)
783-786: LGTM! Appropriate stub for unsupported operation. The BatchDelete stub correctly indicates that batch deletion is not currently supported by the Anthropic provider, consistent with the error handling pattern used across providers.
core/providers/gemini/batch.go (1)
986-1043: LGTM! Well-implemented BatchDelete for Gemini provider. The implementation correctly:
- Validates required batch_id
- Builds the proper URL with batches/ prefix handling
- Uses DELETE HTTP method
- Sets appropriate headers
- Handles both 200 and 204 success status codes
- Returns a properly structured response
core/providers/parasail/batch.go (1)
35-38: LGTM! Consistent with other unsupported batch operations. The BatchDelete stub appropriately returns an unsupported operation error, matching the pattern used for all batch operations in the Parasail provider.
core/bifrost.go (1)
1133-1209: LGTM! The `BatchDeleteRequest` method follows the established pattern used by other batch operations (`BatchCancelRequest`, `BatchResultsRequest`, etc.) with consistent:
- Input validation (nil check, provider required, batch_id required)
- Context defaulting
- Provider lookup and config retrieval
- Custom provider base type resolution
- Key selection for providers requiring keys
- Retry execution with proper request type
- Error field augmentation
core/providers/elevenlabs/batch.go (1)
35-38: LGTM! The `BatchDelete` stub correctly follows the established pattern for unsupported operations, consistent with the other batch method stubs in this file.
core/providers/ollama/batch.go (1)
35-38: LGTM! The `BatchDelete` stub correctly follows the established pattern for unsupported operations, consistent with the other batch method stubs in this file.
transports/bifrost-http/integrations/router.go (5)
88-88: LGTM! The `DeleteRequest` field addition to `BatchRequest` follows the established pattern for other batch request types.
159-161: LGTM! The `BatchDeleteResponseConverter` type definition follows the established pattern for other batch response converters.
291-291: LGTM! The `BatchDeleteResponseConverter` field in `RouteConfig` follows the established pattern for other batch response converters.
820-840: LGTM! The `BatchDeleteRequest` case in `handleBatchRequest` follows the exact same pattern as other batch request handlers (`BatchCreateRequest`, `BatchCancelRequest`, etc.), with consistent:
- Nil request validation
- Client method invocation
- PostCallback execution
- Response converter application
422-426: `ErrResumableUploadInit` is properly defined and exported. The error is declared in `genai.go` (lines 35-36) as an exported sentinel error with appropriate documentation, and is correctly checked at line 423 in `router.go`.
core/providers/gemini/files.go (2)
521-549: LGTM! The helper functions are well-implemented:
- `toGeminiFileState` correctly inverts `ToBifrostFileStatus` for round-trip compatibility
- `formatGeminiTimestamp` properly handles the zero-value case and uses UTC
- `safeDerefInt64` is a standard nil-safe dereference pattern
A sketch of two of these helpers follows.
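The sketch below shows what two of those helpers plausibly look like given the behaviors described; the exact signatures are assumptions, not the file's literal code:

// formatGeminiTimestamp: zero value maps to "", otherwise RFC 3339 in UTC.
func formatGeminiTimestamp(ts int64) string {
	if ts == 0 {
		return ""
	}
	return time.Unix(ts, 0).UTC().Format(time.RFC3339)
}

// safeDerefInt64: nil-safe dereference with a zero default.
func safeDerefInt64(v *int64) int64 {
	if v == nil {
		return 0
	}
	return *v
}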
487-506: No action needed. The `ToGeminiFileListResponse` function correctly does not populate the `URI` field because `FileObject` items in the list response do not contain URI information. Unlike `ToGeminiFileUploadResponse` and `ToGeminiFileRetrieveResponse`, which operate on full response objects with `StorageURI` fields, the list response contains only basic `FileObject` metadata without storage URI data.
664-778: LGTM! Well-structured parameter extraction with proper validation.The batch parameter extractors include appropriate validation (non-empty batch_id, proper format with "batches/" prefix) and handle provider defaults consistently.
780-806: LGTM! Clear request routing based on upload protocol.The function properly delegates to specialized parsers based on the upload protocol and content type, providing a clean separation of concerns.
1011-1113: LGTM! Proper session lifecycle management in phase 2.The phase 2 handler correctly retrieves session metadata, validates it, copies file content, and cleans up the session after extracting provider information. The session cleanup at line 1105 ensures no memory leaks.
1169-1245: LGTM! Consistent parameter extraction pattern across file operations.The file list/retrieve/delete parameter extractors follow the same provider extraction and validation pattern established in the batch operations, maintaining good consistency across the codebase.
core/providers/gemini/types.go (3)
1578-1589: LGTM! SDK-aligned job state constants for GenAI compatibility.The GeminiJobState constants are correctly separated from the internal GeminiBatchState constants, allowing proper translation between the Google GenAI SDK format and internal representations.
1603-1651: LGTM! Comprehensive SDK batch type definitions.The SDK batch types properly mirror the Google GenAI Python SDK format, providing clear request/response structures for batch create, list, retrieve, cancel, and delete operations. The separation of SDK types from internal types maintains a clean boundary between external API surface and internal implementation.
1656-1662: LGTM! GeminiFileUploadRequest properly supports resumable uploads.The additions of
MimeTypeandResumableSessionIDfields enable proper MIME type handling and multi-phase resumable upload tracking. Removing the Provider field (noted in the summary) reduces public coupling to the schemas package, which is a good architectural decision.
func init() {
	go func() {
		ticker := time.NewTicker(10 * time.Minute)
		for range ticker.C {
			now := time.Now()
			uploadSessions.Range(func(key, value interface{}) bool {
				if session, ok := value.(*uploadSession); ok {
					if now.Sub(session.CreatedAt) > time.Hour {
						uploadSessions.Delete(key)
					}
				}
				return true
			})
		}
	}()
}
Provide a mechanism to stop the cleanup goroutine for graceful shutdown.
The cleanup goroutine runs indefinitely with no way to stop it. If the application needs to shut down gracefully, this goroutine will continue running and prevent clean termination.
Consider using a context or stop channel:
+var cleanupStop = make(chan struct{})
+
func init() {
go func() {
ticker := time.NewTicker(10 * time.Minute)
+ defer ticker.Stop()
for {
- for range ticker.C {
+ select {
+ case <-ticker.C:
now := time.Now()
uploadSessions.Range(func(key, value interface{}) bool {
if session, ok := value.(*uploadSession); ok {
if now.Sub(session.CreatedAt) > time.Hour {
uploadSessions.Delete(key)
}
}
return true
})
+ case <-cleanupStop:
+ return
}
}
}()
}Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go around lines 52 to 67, the
cleanup goroutine spawned in init() runs forever with no shutdown signal; change
it to accept a cancellable signal (either a package-level stop channel or a
context.Context) and use a select between ticker.C and ctx.Done()/stopChan to
terminate; on shutdown close/ cancel the signal, stop the ticker with
ticker.Stop(), and return from the goroutine so the process can exit cleanly
(also add an exported Shutdown/Close function or wire the context from
application startup to call cancellation).
Actionable comments posted: 0
♻️ Duplicate comments (9)
transports/bifrost-http/integrations/router.go (1)
868-893: Remove `fmt.Printf` debug logging from file request paths. These `[DEBUG]` `fmt.Printf` statements should not remain in production request handling; they were already flagged in a prior review. Replace them with structured logging (`g.logger.Debug(...)`) if you still need diagnostics, or remove them entirely:
- Lines 868, 875, 879, 881, 883, 887, 890, 892
- Line 1001 (file response conversion error)
Also applies to: 1001-1001
transports/bifrost-http/integrations/genai.go (5)
45-49: Handle `crypto/rand.Read` errors in `generateSessionID`. `rand.Read` can fail; ignoring its error may yield predictable session IDs, which is undesirable even for upload sessions. This was raised previously and still applies. Consider:
 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Extremely rare; fall back to time-based ID rather than silently weakening randomness
+		now := time.Now().UnixNano()
+		return fmt.Sprintf("%d-%d", now, now)
+	}
 	return hex.EncodeToString(bytes)
 }
867-934: Critical: multipart/related parsing still corrupts binary file content. `parseGeminiMultipartRelated` still converts the full body to a string and splits on string boundaries:
- `parts := strings.Split(string(body), delimiter)`
- Later, file content is assigned via `uploadReq.File = []byte(content)`
This path corrupts arbitrary binary payloads (null bytes, non-UTF-8 sequences) and was previously flagged as critical.
Operate on `[]byte` throughout, e.g.:

-	delimiter := "--" + boundary
-	parts := strings.Split(string(body), delimiter)
+	delimiter := []byte("--" + boundary)
+	parts := bytes.Split(body, delimiter)

-	// work with string `part` and `content`
+	for _, part := range parts {
+		part = bytes.TrimSpace(part)
+		if len(part) == 0 || bytes.Equal(part, []byte("--")) {
+			continue
+		}
+		// Find header/content separator as byte indexes, then:
+		// headers := part[:headerEnd]
+		// content := part[contentStart:]
+		// and pass `content` directly into metadata unmarshal or assign to uploadReq.File (copy if needed).
+	}

This avoids string conversion and preserves exact binary content.
792-806: Remove remaining `fmt.Printf` debug logging in GenAI integration. There are many `[DEBUG]` `fmt.Printf` calls left in the Gemini file/resumable helpers (parsers, callbacks, status setters). These were previously flagged and should not remain in production:
- Around `parseGeminiFileUploadRequest` / `parseGeminiResumableUpload` / `parseGeminiResumableUploadPhase2`
- In `setResumableUploadFinalStatus`
- In `extractGeminiResumableUploadParams` and `extractGeminiFileUploadParams`
Replace them with the integration's structured logger (e.g., `logger.Debug(...)`) if you need persistent diagnostics, or delete them outright, and drop any now-unused `fmt` imports.
Also applies to: 814-865, 1013-1063, 1072-1077, 1108-1109, 1125-1125, 1139-1147
1051-1057: Avoid hardcoding file purpose to "batch" in resumable phase-2. Phase-2 resumable uploads unconditionally set `uploadReq.Purpose = "batch"`. That may be wrong if the same endpoint is used for non-batch uploads (e.g., general files for prompting). Consider:
- Carrying the intended purpose in the phase-1 metadata/session and reusing it here; or
- Allowing the client to specify purpose in metadata with validation and falling back to "batch" only when none is provided.
1138-1145: Make resumable upload URL construction reverse‑proxy aware

The upload URL for phase‑2 is built from `ctx.IsTLS()` and `ctx.Host()` only:

```go
scheme := "http"
if ctx.IsTLS() {
	scheme = "https"
}
host := string(ctx.Host())
uploadURL := fmt.Sprintf("%s://%s/...", scheme, host, ...)
```

Behind a reverse proxy, this will produce the internal host/scheme rather than the client‑visible one, breaking resumable uploads.

Read `X-Forwarded-Proto` and `X-Forwarded-Host` first (falling back to `ctx.IsTLS()`/`ctx.Host()` only if they're unset), following the pattern used elsewhere in the HTTP layer, then construct `uploadURL` from those values.
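A sketch of the forwarded-header fallback chain using fasthttp primitives. The helper name `externalSchemeAndHost` is illustrative, and the header precedence shown is the conventional one, not necessarily the exact pattern used elsewhere in this codebase:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/valyala/fasthttp"
)

// externalSchemeAndHost resolves the client-visible scheme and host,
// preferring X-Forwarded-* headers set by a reverse proxy and falling
// back to the connection's own TLS state and Host header.
func externalSchemeAndHost(ctx *fasthttp.RequestCtx) (string, string) {
	scheme := "http"
	proto := strings.ToLower(string(ctx.Request.Header.Peek("X-Forwarded-Proto")))
	if strings.Contains(proto, "https") || ctx.IsTLS() {
		scheme = "https"
	}
	host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
	if host == "" {
		host = string(ctx.Host())
	}
	return scheme, host
}

func main() {
	var ctx fasthttp.RequestCtx
	ctx.Request.Header.Set("X-Forwarded-Proto", "https")
	ctx.Request.Header.Set("X-Forwarded-Host", "api.example.com")
	scheme, host := externalSchemeAndHost(&ctx)
	fmt.Printf("%s://%s\n", scheme, host) // https://api.example.com
}
```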
870-896: Fix defensive stats computation to prevent negative counts.The BatchStats math at lines 887-891 can produce negative
PendingRequestCountorSuccessfulRequestCountwhenresp.RequestCounts.Totalis zero butCompletedorFailedare non-zero (e.g., fromBatchRetrievewhich leavesTotalat zero).Apply the defensive totals pattern suggested in the previous review to ensure all counts remain non-negative.
- result.Metadata = &GeminiBatchMetadata{ - Name: resp.ID, - State: ToGeminiJobState(resp.Status), - CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339), - BatchStats: &GeminiBatchStats{ - RequestCount: resp.RequestCounts.Total, - PendingRequestCount: resp.RequestCounts.Total - resp.RequestCounts.Completed, - SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed, - }, - } + total := resp.RequestCounts.Total + if total == 0 { + total = resp.RequestCounts.Completed + resp.RequestCounts.Failed + } + success := resp.RequestCounts.Completed - resp.RequestCounts.Failed + if success < 0 { + success = 0 + } + pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed + if pending < 0 { + pending = 0 + } + + result.Metadata = &GeminiBatchMetadata{ + Name: resp.ID, + State: ToGeminiJobState(resp.Status), + CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339), + BatchStats: &GeminiBatchStats{ + RequestCount: total, + PendingRequestCount: pending, + SuccessfulRequestCount: success, + }, + }Based on learnings from previous reviews.
898-933: Apply the same defensive stats fix here.Lines 914-918 have the identical negative count issue. Use the same defensive computation pattern to derive
total,success, andpendingwith bounds checks.Based on learnings from previous reviews.
935-972: Apply the same defensive stats fix in the batch loop.Lines 953-957 within the loop have the same issue. Each batch item needs the defensive totals computation to avoid negative counts.
Based on learnings from previous reviews.
🧹 Nitpick comments (2)
transports/bifrost-http/integrations/router.go (1)
80-89: BatchDelete wiring through router and batch handler is coherent

Adding `DeleteRequest` to `BatchRequest`, the `BatchDeleteResponseConverter` type/field on `RouteConfig`, and the `schemas.BatchDeleteRequest` case in `handleBatchRequest` collectively mirror existing batch operations and correctly delegate to `BatchDeleteRequest` on the client.

The `BatchCreateRequestConverter` field now serves as a generic batch converter for all batch types; consider renaming it to `BatchRequestConverter` in a follow-up to better reflect its broader role.

Also applies to: 159-162, 291-291, 820-841
transports/bifrost-http/integrations/genai.go (1)
438-661: Gemini batch route configs integrate cleanly with Bifrost batch APIs

The new batch routes:

- Map Gemini SDK batch structs (create/list/retrieve/cancel/delete) to `BifrostBatch*Request` types, including handling of `src` (file vs inline) and `pageSize`/`pageToken`.
- Use PreCallbacks to derive provider from headers and normalize `batch_id` path params (`batches/{id}`, stripping `:cancel` where needed).
- Respect RawResponse passthrough and otherwise convert via `gemini.ToGemini*` helpers.

This batch plumbing looks correct and consistent with the rest of the router; a usage sketch follows below. The `BatchCreateRequestConverter` on `RouteConfig` is now used as a generic batch converter for all batch operations; consider renaming it (and associated comments) to something like `BatchRequestConverter` in a later cleanup for clarity.

Also applies to: 663-778
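As a usage illustration, deleting a batch through these routes could look like the following. The `/genai` prefix and the `x-model-provider` header come from this PR; the exact path shape, host, port, and batch ID are placeholders and assumptions, not confirmed route definitions:

```go
package main

import (
	"fmt"
	"net/http"
)

// Illustrative only: issues a Gemini-SDK-style batch delete against the
// Bifrost GenAI surface exposed under the /genai prefix.
func main() {
	req, err := http.NewRequest(http.MethodDelete,
		"http://localhost:8080/genai/v1beta/batches/batch-123", nil) // assumed path
	if err != nil {
		panic(err)
	}
	// Optional: route to a specific provider; defaults to Gemini.
	req.Header.Set("x-model-provider", "gemini")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```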
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
- core/providers/bedrock/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/openai/batch.go
- core/providers/groq/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/openrouter/batch.go
- core/providers/anthropic/batch.go
- core/schemas/batch.go
- core/providers/perplexity/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/mistral/batch.go
- core/providers/vertex/batch.go
- core/schemas/provider.go
- core/schemas/bifrost.go
- core/providers/azure/batch.go
- core/providers/cerebras/batch.go
- core/providers/parasail/batch.go
- core/providers/gemini/files.go
- transports/bifrost-http/integrations/genai.go
- core/providers/gemini/batch.go
- core/providers/cohere/batch.go
- core/bifrost.go
- core/providers/sgl/batch.go
- transports/bifrost-http/integrations/router.go
- core/providers/ollama/batch.go
- core/providers/gemini/types.go
🧬 Code graph analysis (11)
core/providers/vertex/batch.go (4)
- core/providers/vertex/vertex.go (1): VertexProvider (57-62)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/schemas/provider.go (2)
- core/schemas/bifrost.go (1): BatchDeleteRequest (103-103)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)

core/schemas/bifrost.go (1)
- ui/lib/types/config.ts (1): RequestType (122-134)

core/providers/azure/batch.go (5)
- core/providers/azure/azure.go (1): AzureProvider (22-27)
- core/schemas/account.go (1): Key (8-17)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/cerebras/batch.go (4)
- core/providers/cerebras/cerebras.go (1): CerebrasProvider (16-21)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/gemini/files.go (1)
- core/schemas/files.go (1): BifrostFileUploadResponse (89-105)

transports/bifrost-http/integrations/genai.go (3)
- core/schemas/provider.go (1): Provider (313-362)
- core/schemas/files.go (5): BifrostFileUploadRequest (53-66), BifrostFileUploadResponse (89-105), BifrostFileListRequest (108-124), BifrostFileRetrieveRequest (136-145), BifrostFileDeleteRequest (167-176)
- core/schemas/models.go (1): Model (109-129)

core/bifrost.go (3)
- core/schemas/bifrost.go (3): BatchDeleteRequest (103-103), BifrostError (364-373), RequestType (83-83)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/provider.go (1): Provider (313-362)

core/providers/sgl/batch.go (4)
- core/providers/sgl/sgl.go (1): SGLProvider (18-23)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

transports/bifrost-http/integrations/router.go (4)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- transports/bifrost-http/integrations/genai.go (1): ErrResumableUploadInit (36-36)
- core/schemas/bifrost.go (2): BatchDeleteRequest (103-103), FileUploadRequest (104-104)
- core/schemas/provider.go (1): Provider (313-362)

core/providers/ollama/batch.go (4)
- core/providers/ollama/ollama.go (1): OllamaProvider (18-23)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (25)
core/schemas/bifrost.go (1)

103-103: LGTM! The `BatchDeleteRequest` constant addition follows the established pattern and is positioned logically among other batch operation types.

core/schemas/provider.go (3)

182-182: LGTM! The `BatchDelete` field addition to `AllowedRequests` follows the established pattern for other batch operations.

231-232: LGTM! The `BatchDeleteRequest` case in `IsOperationAllowed` correctly returns the `BatchDelete` flag, maintaining consistency with other batch operation checks.

350-351: LGTM! The `BatchDelete` method addition to the `Provider` interface follows the established signature pattern, ensuring all providers implement batch delete support.

core/providers/vertex/batch.go (1)

35-38: LGTM! The `BatchDelete` stub implementation correctly returns an unsupported operation error, consistent with other batch operations in the Vertex provider.

core/providers/azure/batch.go (1)

438-441: LGTM! The `BatchDelete` stub implementation follows the established pattern for unsupported operations in the Azure provider.

core/providers/mistral/batch.go (1)

35-38: LGTM! The `BatchDelete` stub implementation correctly returns an unsupported operation error, matching the pattern used for other batch operations.

core/providers/cohere/batch.go (1)

35-38: LGTM! The `BatchDelete` stub implementation follows the established unsupported operation pattern for the Cohere provider.

core/providers/cerebras/batch.go (1)

35-38: LGTM! The `BatchDelete` stub implementation is consistent with other unsupported batch operations in the Cerebras provider.

core/providers/ollama/batch.go (1)

35-38: LGTM! The `BatchDelete` stub implementation correctly follows the unsupported operation pattern for the Ollama provider.

core/providers/sgl/batch.go (1)

35-38: SGL BatchDelete stub matches existing unsupported-operation pattern. Signature and error wiring are consistent with other SGL batch methods and the Provider interface; no issues.

core/providers/parasail/batch.go (1)

35-38: Parasail BatchDelete stub is correctly wired as unsupported. Implementation cleanly returns the standardized unsupported-operation error and matches the Provider interface.

core/bifrost.go (1)

1133-1209: BatchDeleteRequest implementation is consistent with other batch APIs. Validation, config resolution, key selection, retry wiring, and ExtraFields population mirror BatchRetrieve/BatchCancel/BatchResults; behavior looks correct.

core/providers/gemini/files.go (1)

466-485: Typed Gemini upload wrapper looks correct. The wrapper shape (`{"file": {...}}`) and field mapping from `BifrostFileUploadResponse` to `GeminiFileResponse` are coherent; timestamp/status helpers are reused correctly.

transports/bifrost-http/integrations/router.go (1)

421-426: Sentinel handling for resumable-upload init is correctly short‑circuiting. Treating `ErrResumableUploadInit` as a special case and returning without sending an additional error/response cleanly supports PreCallback‑handled resumable init flows.

transports/bifrost-http/integrations/genai.go (4)

78-132: GenAI chat/embedding/speech/transcription routing and converters look correct. The main `/v1beta/models/{model:*}` route correctly branches GeminiGenerationRequest into embedding/chat/speech/transcription Bifrost requests and uses the appropriate `ToGemini*` converters plus streaming config; no functional concerns.

165-205: Gemini file routing (upload, resumable, list, retrieve, delete) is coherently mapped to Bifrost.

- File routes now use Gemini SDK request types and convert to the corresponding `BifrostFile*Request` with provider taken from context/header.
- Resumable POST/PUT routes share a consistent phase‑2 parser, converter, and post‑callback, and correctly reuse the same file upload machinery.
- List/retrieve/delete routes extract IDs/query params and map them cleanly into Bifrost requests, with RawResponse passthrough when present.

Overall, the file API surface looks consistent and aligns well with the core schema types.

Also applies to: 207-314, 318-355, 357-433

1171-1177: Provider and path‑param extraction helpers for file list/retrieve/delete look good. `extractGeminiFileListQueryParams`, `extractGeminiFileRetrieveParams`, and `extractGeminiFileDeleteParams`:

- Default provider to Gemini or take it from `x-model-provider`.
- Safely parse `pageSize`/`pageToken` and `file_id` from query/path.
- Populate the Gemini SDK request structs used by the converters and set provider in the Bifrost context.

These helpers align with the rest of the integration's parameter handling.

Also applies to: 1195-1219, 1221-1245

1249-1252: Batch route registration is correctly hooked into the GenAI router. Including `CreateGenAIBatchRouteConfigs("/genai", handlerStore)` alongside the existing GenAI and file routes cleanly wires the new batch endpoints into the GenAI router.

core/providers/gemini/batch.go (3)

846-868: LGTM! Status mapping is comprehensive. The conversion logic correctly maps all Bifrost batch statuses to their Gemini SDK equivalents, with a sensible default fallback.

974-984: LGTM! Simple and correct converter. This function appropriately converts only the essential fields without complex computations.

986-1043: LGTM! BatchDelete implementation follows established patterns. The method correctly:

- Validates required fields
- Handles both batch ID formats (with/without "batches/" prefix)
- Accepts appropriate HTTP status codes for DELETE (200 and 204)
- Returns proper response with latency metadata

core/providers/gemini/types.go (3)

1578-1589: LGTM! SDK job state constants are well-defined. The constants correctly distinguish SDK job states from internal batch states, with clear naming and appropriate values.

1591-1651: LGTM! SDK batch types are correctly structured. The type definitions appropriately model the Google GenAI SDK batch API surface. The `Src interface{}` field in `GeminiBatchCreateRequestSDK` is correctly typed to accept both string (file reference) and array (inline requests) formats. Note: the past review comment about Src field validation applies to the transport layer handling (transports/bifrost-http/integrations/genai.go), not these type definitions.

1655-1662: LGTM! File upload request properly updated for resumable uploads. The changes appropriately:

- Add a `MimeType` field for explicit MIME type handling
- Add an internal `ResumableSessionID` field for resumable upload session tracking
- Remove the `Provider` field to simplify the public API

These modifications align with the resumable upload functionality mentioned in the PR objectives.
Actionable comments posted: 1
♻️ Duplicate comments (9)
transports/bifrost-http/integrations/genai.go (6)
51-67: Consider adding a shutdown signal for the cleanup goroutine

The `init`-spawned cleanup goroutine runs indefinitely with no way to stop it, which can complicate graceful shutdown. If your process supports coordinated shutdown, consider wiring a stop channel or context into this ticker loop so it can exit cleanly, as in the sketch below.
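A minimal sketch of the suggested stop-channel wiring; the names `sessionCleanupStop` and `startSessionCleanup` are illustrative, not the integration's actual identifiers:

```go
package main

import "time"

// sessionCleanupStop lets the owner terminate the cleanup loop; close it
// during shutdown to stop the ticker goroutine cleanly.
var sessionCleanupStop = make(chan struct{})

func startSessionCleanup(interval time.Duration, cleanup func()) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop() // release the ticker when the loop exits
		for {
			select {
			case <-ticker.C:
				cleanup()
			case <-sessionCleanupStop:
				return
			}
		}
	}()
}

func main() {
	startSessionCleanup(time.Minute, func() { /* expire stale sessions */ })
	close(sessionCleanupStop) // e.g., from a shutdown hook
	time.Sleep(10 * time.Millisecond)
}
```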
23-49: Handle `crypto/rand.Read` errors when generating session IDs

`generateSessionID` ignores the error from `rand.Read`, which is rare but can lead to weak or predictable IDs if the randomness source fails. Consider handling the error explicitly, e.g.:

```diff
 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
-	return hex.EncodeToString(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Extremely rare; fall back to a time-based ID rather than silently using zeroed bytes
+		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().UnixNano())
+	}
+	return hex.EncodeToString(bytes)
 }
```

438-661: Batch route configuration is structurally sound but needs stricter `Src` validation

The new `CreateGenAIBatchRouteConfigs` correctly wires Gemini SDK batch routes (create/list/retrieve/cancel/delete) to Bifrost batch requests and response converters, using PreCallbacks to derive provider, model, and batch IDs.

However, in the create converter, `Src` handling silently ignores unsupported types and non‑map array items:

```go
switch src := sdkReq.Src.(type) {
case string:
	// ok
case []interface{}:
	for i, item := range src {
		if itemMap, ok := item.(map[string]interface{}); ok {
			// ok
		}
		// invalid items silently skipped
	}
	// other types fall through with empty Requests/InputFileID
}
```

For invalid `Src` inputs this flows into core, which later returns a generic "either input_file_id or requests must be provided". It would be better to validate at the transport layer and return a clear 400, e.g.:

```diff
 switch src := sdkReq.Src.(type) {
 case string:
 	bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
 case []interface{}:
 	requests := make([]schemas.BatchRequestItem, 0, len(src))
 	for i, item := range src {
-		if itemMap, ok := item.(map[string]interface{}); ok {
-			customID := fmt.Sprintf("request-%d", i)
-			requests = append(requests, schemas.BatchRequestItem{
-				CustomID: customID,
-				Body:     itemMap,
-			})
-		}
+		itemMap, ok := item.(map[string]interface{})
+		if !ok {
+			return nil, fmt.Errorf("src[%d] must be an object", i)
+		}
+		customID := fmt.Sprintf("request-%d", i)
+		requests = append(requests, schemas.BatchRequestItem{
+			CustomID: customID,
+			Body:     itemMap,
+		})
 	}
 	bifrostReq.Requests = requests
+default:
+	return nil, fmt.Errorf("src must be a string (file reference) or array of request objects")
 }
```

so clients receive specific feedback when `src` has the wrong shape.
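The same check can also live in a standalone helper, shown here as a sketch. `validateSrc` is a hypothetical name; the review's inline diff achieves the same result inside the converter:

```go
package main

import "fmt"

// validateSrc enforces the two accepted shapes for src: a string file
// reference, or a slice whose elements are all JSON objects.
func validateSrc(src interface{}) error {
	switch v := src.(type) {
	case string:
		return nil
	case []interface{}:
		for i, item := range v {
			if _, ok := item.(map[string]interface{}); !ok {
				return fmt.Errorf("src[%d] must be an object", i)
			}
		}
		return nil
	default:
		return fmt.Errorf("src must be a string (file reference) or array of request objects")
	}
}

func main() {
	fmt.Println(validateSrc("files/abc123"))                          // <nil>
	fmt.Println(validateSrc([]interface{}{map[string]interface{}{}})) // <nil>
	fmt.Println(validateSrc(42))                                      // error
}
```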
1011-1063: Hardcoded `Purpose` may be too narrow for resumable uploads

In `parseGeminiResumableUploadPhase2`, `uploadReq.Purpose` is always set to `"batch"`. If this resumable pathway is ever reused for non‑batch uploads (e.g., general file storage for other Gemini features), this hard-coding will be incorrect. Consider either:

- Passing the purpose from phase-1 metadata and storing it in the `uploadSession`, then using that here, or
- At least making `"batch"` a default only when no explicit purpose is known.

```diff
-	uploadReq.Filename = session.Filename
-	uploadReq.MimeType = session.MimeType
-	uploadReq.Purpose = "batch" // Default purpose for file uploads via GenAI API
+	uploadReq.Filename = session.Filename
+	uploadReq.MimeType = session.MimeType
+	if session.Purpose != "" {
+		uploadReq.Purpose = session.Purpose
+	} else {
+		uploadReq.Purpose = "batch"
+	}
```

(with `Purpose` added to `uploadSession` if needed).

1115-1163: Support X-Forwarded headers when building resumable upload URL

`extractGeminiFileUploadParams` builds the phase-2 upload URL using `ctx.IsTLS()` and `ctx.Host()` only:

```go
scheme := "http"
if ctx.IsTLS() {
	scheme = "https"
}
host := string(ctx.Host())
uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)
```

Behind a reverse proxy, this can yield an internal host/scheme instead of the client-facing address, breaking the client's follow-up upload call.

Consider honoring `X-Forwarded-Proto` and `X-Forwarded-Host` first, falling back to `ctx.IsTLS()`/`ctx.Host()`:

```diff
-	scheme := "http"
-	if ctx.IsTLS() {
-		scheme = "https"
-	}
-	host := string(ctx.Host())
+	scheme := "http"
+	if xfProto := strings.ToLower(string(ctx.Request.Header.Peek("X-Forwarded-Proto"))); strings.Contains(xfProto, "https") {
+		scheme = "https"
+	} else if ctx.IsTLS() {
+		scheme = "https"
+	}
+
+	host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
+	if host == "" {
+		host = string(ctx.Host())
+	}
```

so the generated URL works correctly when Bifrost is deployed behind a proxy.

780-793: Remove or replace `[DEBUG]` `fmt.Printf` logging with structured logger

There are numerous `fmt.Printf` calls in the new resumable/file/batch plumbing (e.g., `parseGeminiFileUploadRequest`, `parseGeminiResumableUpload`, `parseGeminiResumableUploadPhase2`, `setResumableUploadFinalStatus`, `extractGeminiResumableUploadParams`, `extractGeminiFileUploadParams`) that:

- Print paths, session IDs, providers, filenames, and sometimes full response bodies.
- Bypass the existing `logger` abstraction.
- Risk leaking sensitive data to stdout in production.

These should be removed or migrated to `logger.Debug(...)` with carefully chosen, non-sensitive fields. For example:

```diff
-	fmt.Printf("[DEBUG] parseGeminiResumableUploadPhase2: bodyLen=%d, filename=%s, provider=%s\n", len(body), session.Filename, session.Provider)
+	// logger.Debug("parseGeminiResumableUploadPhase2", "bodyLen", len(body), "filename", session.Filename, "provider", session.Provider)
```

and similar for the other debug statements.

Also applies to: 814-815, 824-836, 841-842, 860-862, 1013-1014, 1046-1047, 1071-1077, 1088-1109, 1125-1130, 1147-1157

transports/bifrost-http/integrations/utils.go (1)

194-197: Remove debug log that prints full response body

The `fmt.Printf` in `sendSuccess` logs the entire response payload, which can expose sensitive data (PII, API keys, tokens) and should not be present in production. The explicit `Content-Length` header is redundant because `SetBody` already sets it.

Recommend removing the debug line (or replacing it with structured logging that omits bodies):

```diff
-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
-	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
+	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
+	ctx.SetBody(responseBody)
```

transports/bifrost-http/integrations/router.go (1)

868-893: Remove file-path debug `fmt.Printf` statements

The `[DEBUG]` `fmt.Printf` calls in `handleFileRequest` (logging provider, purpose, filenames, errors, and conversion failures) bypass the structured logger and may leak sensitive file metadata or error details. They should not remain in production. Recommend removing them or switching to `g.logger.Debug(...)` with sanitized fields.

Also applies to: 1000-1001

core/providers/gemini/batch.go (1)

846-972: Fix BatchStats math in SDK converters to avoid negative pending/success counts

`ToGeminiBatchJobResponse`, `ToGeminiBatchRetrieveResponse`, and `ToGeminiBatchListResponse` currently compute:

```go
RequestCount:           resp.RequestCounts.Total,
PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
```

When `Total` is left at zero (as in `BatchRetrieve`, where only `Completed`/`Failed` are populated), this can produce negative pending counts and inconsistent totals.

Derive totals defensively and clamp pending to non-negative, and mirror the same pattern in `ToGeminiBatchRetrieveResponse` and inside the loop in `ToGeminiBatchListResponse` when populating `BatchStats`, so invariants like `RequestCount ≥ 0` and `PendingRequestCount ≥ 0` always hold; a standalone sketch follows below.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
- core/providers/parasail/batch.go
- core/providers/mistral/batch.go
- core/schemas/batch.go
- core/bifrost.go
- core/providers/vertex/batch.go
- core/providers/cohere/batch.go
- core/providers/anthropic/batch.go
- core/schemas/bifrost.go
- core/providers/groq/batch.go
- core/schemas/provider.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/openrouter/batch.go
- core/providers/perplexity/batch.go
- core/providers/sgl/batch.go
- core/providers/azure/batch.go
- core/providers/bedrock/batch.go
- core/providers/cerebras/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/openai/batch.go
- transports/bifrost-http/integrations/router.go
- core/providers/gemini/types.go
- core/providers/gemini/files.go
- transports/bifrost-http/integrations/genai.go
- core/providers/gemini/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/ollama/batch.go
🧬 Code graph analysis (11)
core/providers/openrouter/batch.go (5)
- core/providers/openrouter/openrouter.go (1): OpenRouterProvider (18-23)
- core/schemas/account.go (1): Key (8-17)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/perplexity/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/sgl/batch.go (2)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)

core/providers/azure/batch.go (4)
- core/providers/azure/azure.go (1): AzureProvider (22-27)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/bedrock/batch.go (4)
- core/providers/bedrock/bedrock.go (1): BedrockProvider (28-34)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/cerebras/batch.go (3)
- core/providers/cerebras/cerebras.go (1): CerebrasProvider (16-21)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)

core/providers/openai/batch.go (4)
- core/providers/openai/openai.go (1): OpenAIProvider (24-30)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

transports/bifrost-http/integrations/router.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- transports/bifrost-http/integrations/genai.go (1): ErrResumableUploadInit (36-36)
- core/schemas/bifrost.go (1): BatchDeleteRequest (103-103)

core/providers/gemini/files.go (1)
- core/schemas/files.go (1): BifrostFileUploadResponse (89-105)

transports/bifrost-http/integrations/genai.go (4)
- core/schemas/provider.go (1): Provider (313-362)
- core/schemas/bifrost.go (4): ModelProvider (32-32), Gemini (48-48), FileUploadRequest (104-104), BatchCreateRequest (98-98)
- core/providers/gemini/types.go (3): GeminiFileUploadRequest (1656-1662), Type (779-779), GeminiBatchCreateRequestSDK (1596-1602)
- core/schemas/batch.go (2): BifrostBatchCreateRequest (65-82), BatchRequestItem (31-37)

core/providers/ollama/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (22)
core/providers/bedrock/batch.go (1)

931-935: Unsupported BatchDelete stub looks correct and consistent with the stack. The `BatchDelete` method cleanly returns a structured unsupported-operation error keyed by `schemas.BatchDeleteRequest` and `provider.GetProviderKey()`, matching the new BatchDelete flow and the pattern used for other non‑Gemini providers in this stack. No additional wiring or validation is needed here until Bedrock gains native batch delete support.

core/providers/perplexity/batch.go (1)

35-38: LGTM! Consistent stub implementation. The `BatchDelete` method correctly follows the established pattern for unsupported batch operations in the Perplexity provider, matching the approach used by the other five batch methods in this file.

core/providers/elevenlabs/batch.go (1)

35-38: LGTM! Consistent stub implementation. The BatchDelete method correctly follows the established pattern for unsupported operations in the Elevenlabs provider, matching the implementation style of the other five batch operation methods.

core/providers/ollama/batch.go (1)

35-38: BatchDelete stub correctly mirrors other unsupported batch operations. The `BatchDelete` implementation cleanly matches the existing `Batch*` unsupported stubs (same error helper, correct `BatchDeleteRequest` type, provider key usage), satisfying the updated provider interface without changing Ollama behavior. No issues from a correctness or integration perspective.

core/providers/sgl/batch.go (1)

35-38: LGTM! Consistent implementation following established pattern. The `BatchDelete` method correctly follows the same pattern as the other five Batch* methods in this file, appropriately returning an unsupported operation error for the SGL provider.

core/providers/azure/batch.go (1)

438-441: LGTM! Stub implementation follows the established pattern. The `BatchDelete` stub correctly returns an unsupported operation error, consistent with the PR's objective to provide primary BatchDelete support for Gemini while adding interface stubs for other providers. The implementation aligns with Azure's other batch operation patterns in this file.

core/providers/cerebras/batch.go (1)

35-38: LGTM! Consistent stub implementation. The `BatchDelete` method correctly follows the established pattern of other Batch* stubs in this file. The implementation appropriately returns an unsupported operation error using the correct request type constant and provider key.

core/providers/openai/batch.go (1)

597-600: Implementation is correct and follows the established pattern for unsupported operations. The stub correctly returns an unsupported operation error without a `CheckOperationAllowed` check. This pattern is consistent across all unsupported operation stubs in the codebase (vertex, sgl, parasail, openrouter, and perplexity providers all use the same approach), and OpenAI's Batch API does not provide a delete endpoint. The implementation is appropriate.

core/providers/gemini/files.go (1)

466-485: Gemini upload response wrapper looks correct and SDK-aligned. The `GeminiFileUploadResponseWrapper` and updated `ToGeminiFileUploadResponse` cleanly wrap the file object under `"file"` and map the core fields as expected; no functional issues spotted.

core/providers/gemini/types.go (2)

1568-1651: SDK batch/job state and request/response types are well-shaped. The added Gemini job-state constants and SDK-facing batch structs (`GeminiBatchCreateRequestSDK`, `GeminiBatchJobResponseSDK`, etc.) match the GenAI SDK surface and integrate cleanly with the new converters in `gemini/batch.go`.

1655-1679: File request structs align with new file routing. `GeminiFileUploadRequest` and the list/retrieve/delete request types line up with the GenAI file routes and provider conversions in `genai.go`; structure and JSON tags look correct.

core/providers/openrouter/batch.go (1)

35-38: OpenRouter BatchDelete stub is consistent with other unsupported batch methods. The new `BatchDelete` implementation correctly returns `NewUnsupportedOperationError` and matches the pattern used by the other OpenRouter batch methods.

transports/bifrost-http/integrations/router.go (3)

80-89: BatchDelete wiring into batch request/route config looks consistent. Extending `BatchRequest` with `DeleteRequest` and adding `BatchDeleteResponseConverter` in `RouteConfig` cleanly integrates batch delete into the existing batch plumbing; naming and usage are consistent with the other batch fields.

Also applies to: 159-162, 271-292

421-426: Resumable-upload sentinel handling in PreCallback is safe. Special-casing `ErrResumableUploadInit` to return early from `createHandler` correctly skips Bifrost execution when the PreCallback has already written the HTTP response, without impacting normal error handling for other cases.

820-841: BatchDelete handler mirrors other batch operations correctly. The new `schemas.BatchDeleteRequest` branch in `handleBatchRequest` validates `DeleteRequest`, calls `BatchDeleteRequest` on the client, runs `PostCallback`, and uses `BatchDeleteResponseConverter` when present. This matches the pattern for the other batch operations and should behave as expected.

core/providers/gemini/batch.go (1)

986-1043: Gemini BatchDelete implementation is correct and matches other batch methods. `BatchDelete` validates `batch_id`, builds the proper `/batches/{id}` DELETE URL (handling both bare IDs and `batches/…`), sends the request with `x-goog-api-key`, and returns a well-formed `BifrostBatchDeleteResponse` on 200/204. This is consistent with the other Gemini batch operations.

transports/bifrost-http/integrations/genai.go (6)

165-205: GenAI file route configs and conversions look coherent. The file routes (`/upload/v1beta/files`, `/v1beta/files`, `/v1beta/files/{file_id}` with GET/DELETE) correctly:

- Use Gemini-specific request types (`GeminiFileUploadRequest`, `GeminiFileListRequest`, etc.).
- Convert to the corresponding Bifrost file requests with provider derived from context.
- Use appropriate PreCallbacks to populate provider, query params, and path params.

This wiring matches the Gemini provider's file API expectations.

Also applies to: 322-337, 357-433

780-806: Multipart and raw upload parsers are reasonable. `parseGeminiFileUploadRequest` correctly dispatches between resumable/multipart/raw paths based on headers, while `parseGeminiMultipartUpload` and `parseGeminiRawUpload` assemble `GeminiFileUploadRequest` with file bytes and best-effort filename extraction. The overall parsing strategy looks sound given fasthttp's primitives.

Also applies to: 936-1009

808-865: Resumable upload parsing is functionally correct; be aware of multipart/related edge cases. `parseGeminiResumableUpload` and `parseGeminiMultipartRelated` correctly distinguish:

- JSON-only phase-1 metadata (storing session with display name, MIME type, size).
- Multipart/related with metadata + file content.
- Raw body uploads as a fall-back.

While the current multipart/related parsing via manual boundary splitting is a bit low-level, it is functionally consistent; if you later see interoperability issues with less standard clients, consider moving to a dedicated multipart parser for robustness.

Also applies to: 867-934

1081-1113: Resumable phase-2 PreCallback correctly derives provider and original filename. `extractGeminiResumableUploadParams`:

- Validates `session_id`.
- Loads provider and original filename from the stored session.
- Deletes the session once data is extracted.
- Stores provider and filename in the Bifrost context for downstream converters.

This is a good separation of concerns for provider/filename propagation.

1195-1245: File retrieve/delete param extractors are straightforward and correct. `extractGeminiFileRetrieveParams` and `extractGeminiFileDeleteParams`:

- Normalize provider from `x-model-provider` (defaulting to Gemini).
- Validate and extract `file_id` from the path.
- Populate the appropriate Gemini request structs.

This is the right place to centralize that wiring.

1247-1252: Batch routes correctly integrated into GenAI router. Adding `CreateGenAIBatchRouteConfigs("/genai", handlerStore)` to `NewGenAIRouter` ensures the new batch endpoints are registered alongside the existing GenAI and file routes without changing their behavior.
```go
// setResumableUploadFinalStatus sets the X-Goog-Upload-Status header to "final" for phase 2 responses
func setResumableUploadFinalStatus(ctx *fasthttp.RequestCtx, req interface{}, resp interface{}) error {
	// Set the upload status to final to signal completion of resumable upload
	ctx.Response.Header.Set("X-Goog-Upload-Status", "final")

	// Log the response for debugging
	respJSON, _ := sonic.Marshal(resp)
	fmt.Printf("[DEBUG] setResumableUploadFinalStatus: set X-Goog-Upload-Status=final, response body=%s\n", string(respJSON))

	// Also log the full response headers for debugging
	fmt.Printf("[DEBUG] setResumableUploadFinalStatus: status code=%d\n", ctx.Response.StatusCode())

	return nil
}
```
Post-callback helper is fine; consider dropping debug prints if not needed

`setResumableUploadFinalStatus` correctly sets `X-Goog-Upload-Status: final` for phase-2 responses. The debug logging of full response JSON and status code is useful during development but should be dropped or moved to the structured logger before production, as sketched below.
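A sketch of what the swap could look like; `log/slog` stands in for whichever logger the integration actually exposes, and the message shape is an assumption:

```go
package main

import "log/slog"

// Sketch: replace the fmt.Printf debugging with one structured line that
// omits the response body (no sonic.Marshal of the full payload).
func logUploadFinal(statusCode int) {
	slog.Debug("setResumableUploadFinalStatus: marked upload final",
		"statusCode", statusCode)
}

func main() {
	logUploadFinal(200)
}
```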
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go around lines 1066 to 1079, drop
the two fmt.Printf debug prints (and the sonic.Marshal call) or replace them
with the project's structured logger at debug level; specifically remove
marshaling of the entire response for performance, and either call the existing
logger (e.g., logger.Debugf/Debug) to log a concise message including status or
completely remove the debug logging for production. Ensure no unused imports
remain after removing fmt/sonic.
Actionable comments posted: 0
♻️ Duplicate comments (14)
transports/bifrost-http/integrations/utils.go (1)

196-196: Remove debug log that exposes full response body.

This debug log prints the complete response body, which may contain sensitive data (PII, tokens, credentials). This should be removed before merging.

```diff
 	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
 	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
 }
```

transports/bifrost-http/integrations/genai.go (9)

52-67: Cleanup goroutine runs forever with no shutdown signal.

The `init` goroutine loops on `ticker.C` with no way to stop, and the ticker is never closed. This can block graceful shutdown or leak goroutines in tests. Consider adding a package‑level stop channel or context plus `defer ticker.Stop()` so the owning process can terminate the cleaner cleanly.

780-865: Resumable upload phase‑1 metadata handling is solid but logs should not use `fmt.Printf`.

The resumable handler:

- Correctly branches on `multipart/related` vs JSON metadata vs raw bytes.
- Parses snake_case metadata into `GeminiFileUploadRequest`.
- Creates and stores an `uploadSession` with filename/MIME/size, and associates a `ResumableSessionID`.

However, multiple `[DEBUG]` `fmt.Printf` calls (content type, body length, parsed metadata, session ID, raw content) should not be left in production. Either:

- Remove these lines entirely, or
- Replace with the structured logger used elsewhere (e.g., `logger.Debugf` in this package).

This also applies to similar debug prints in nearby functions.

1011-1063: Resumable phase‑2 parser is mostly correct but hardcodes purpose to `"batch"`.

`parseGeminiResumableUploadPhase2`:

- Validates `session_id` from the path.
- Loads the session, ensuring it exists and has valid type.
- Copies the body into `uploadReq.File` and propagates filename/MIME from the session.
- Stores `ResumableSessionID` for later PreCallback use.

But it unconditionally sets `uploadReq.Purpose = "batch"`. If this resumable endpoint is or will be used for non‑batch uploads (e.g., general file storage for chat context), hardcoding the purpose could be incorrect and surprising. Prefer:

- Reading an explicit purpose from metadata in phase‑1 (with validation), and
- Falling back to `"batch"` only when no explicit purpose is provided.

1066-1079: `setResumableUploadFinalStatus` should not rely on `fmt.Printf` for logging.

Functionally, this callback correctly:

- Sets `X-Goog-Upload-Status: final`.
- Logs the response and status code.

Swap `fmt.Printf` for the integration's logger (or remove the logs) to avoid writing debug output directly to stdout in production.

1081-1113: Resumable phase‑2 provider/filename extraction is good; debug print should use logger.

`extractGeminiResumableUploadParams`:

- Validates `session_id`.
- Loads provider + original filename from the session, defaulting provider to Gemini.
- Cleans up the session entry after extraction.
- Stores both provider and original filename into the Bifrost context.

This is the right place to bind provider and original display name. The `[DEBUG]` `fmt.Printf` should be converted to structured logging or removed.

45-49: Handle `crypto/rand.Read` failure when generating session IDs.

`generateSessionID` ignores the error from `rand.Read`, which can silently fall back to zeroed bytes and weaken ID unpredictability if the RNG fails. At minimum, check the error and either panic/log‑fatal or return a clearly invalid ID.

```diff
 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Treat this as unrecoverable; adjust to your logging policy.
+		panic(fmt.Sprintf("failed to generate secure session ID: %v", err))
+	}
 	return hex.EncodeToString(bytes)
 }
```

438-501: Validate `Src` type and contents explicitly for batch create.

`GeminiBatchCreateRequestSDK.Src` accepts `string` (file ref) or `[]interface{}` (inline requests), but unsupported types and non‑map items in the slice are silently ignored, letting an invalid `src` bubble down and trigger the generic `"either input_file_id or requests must be provided"` error in the provider. That makes debugging client mistakes hard.

Consider validating at this layer:

- If `Src` is neither `string` nor `[]interface{}`: return a clear 400 error like "`src` must be a string (file reference) or array of request objects".
- If `Src` is `[]interface{}`, require each element to be a `map[string]interface{}` (or a strongly typed inline struct); if any element is not, return a 400 with a precise message rather than silently skipping.

This keeps API feedback actionable and avoids opaque downstream failures.

867-934: Critical: multipart/related parsing corrupts binary file content.

`parseGeminiMultipartRelated` converts the raw body to a string and back: it splits via `parts := strings.Split(string(body), delimiter)` and later assigns file content via `uploadReq.File = []byte(content)`. This will corrupt arbitrary binary data, since the string round-trip cannot represent arbitrary bytes safely. All splitting and slicing must be done on `[]byte`.

Refactor to operate purely on `[]byte`, for example:

```diff
-	delimiter := "--" + boundary
-	parts := strings.Split(string(body), delimiter)
+	delimiter := []byte("--" + boundary)
+	parts := bytes.Split(body, delimiter)

-	headerEnd := strings.Index(part, "\r\n\r\n")
+	headerEnd := bytes.Index(part, []byte("\r\n\r\n"))
 	// ...
-	headers := part[:headerEnd]
-	content := part[headerEnd:]
+	headers := string(part[:headerEnd]) // headers can be treated as text
+	content := part[headerEnd:]

-	uploadReq.File = []byte(content)
+	uploadReq.File = append([]byte(nil), content...) // copy raw bytes
```

Ensure all other operations (e.g., trimming, JSON unmarshal of metadata) only convert the header/JSON portion to string, never the raw file bytes.

1115-1163: Honor reverse‑proxy headers when constructing resumable upload URL.

`extractGeminiFileUploadParams` builds the phase‑2 upload URL using `ctx.IsTLS()` and `ctx.Host()`, which will reflect the internal server address when running behind a reverse proxy. Clients will then receive an unusable URL.

Follow the existing pattern in the HTTP layer:

- Prefer `X-Forwarded-Proto` (or `Forwarded`) to determine `http` vs `https`.
- Prefer `X-Forwarded-Host` for the host when present.
- Fall back to `ctx.IsTLS()`/`ctx.Host()` only if no forwarded headers exist.

This ensures the upload URL matches the external address seen by clients.

transports/bifrost-http/integrations/router.go (2)

855-893: Remove `fmt.Printf` debug logs from file upload handling.

Within `handleFileRequest`'s `FileUploadRequest` case, multiple `[DEBUG]` `fmt.Printf` calls log provider, purpose, filename, errors, callbacks, and converter activity. These should not be left in the router:

- They bypass the structured logger and clutter stdout.
- They can leak filenames/purposes in logs unexpectedly.

Either remove them entirely or replace with `g.logger.Debug(...)` at the appropriate points if this visibility is still needed.

1000-1012: Remove debug `fmt.Printf` on file response conversion errors.

The final `if err != nil` block logs conversion errors via `fmt.Printf` before sending an error response. This should either:

- Use `g.logger.Debug/Error` with structured context, or
- Drop the print entirely and rely on centralized logging.

Avoiding direct `fmt.Printf` keeps logging consistent and production‑safe.

core/providers/gemini/batch.go (2)

870-896: Fix BatchStats math to avoid negative or inconsistent counts.

The current logic:

```go
RequestCount:           resp.RequestCounts.Total,
PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
```

assumes `Total` is always populated and that `Completed` includes both successes and failures. But in some flows (e.g., `BatchRetrieve`, listing), `Total` is zero while `Completed`/`Failed` are non‑zero, yielding negative pending counts and incorrect success numbers.

Recommend deriving totals defensively, e.g.:

```go
total := resp.RequestCounts.Total
if total == 0 {
	total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
}
success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
if success < 0 {
	success = 0
}
pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed
if pending < 0 {
	pending = 0
}
```

and mirror the same computation in `ToGeminiBatchRetrieveResponse` and `ToGeminiBatchListResponse` so invariants hold (`RequestCount ≥ 0`, `PendingRequestCount ≥ 0`).

935-972: `ToGeminiBatchListResponse` follows the same pattern; reuse fixed stats computation.

The list converter builds `GeminiBatchJobResponseSDK` entries from the list data, wiring IDs, states, and timestamps into metadata. Apply the same defensive stats computation here to avoid negative pending counts when `Total` is zero or not provided by upstream.
🧹 Nitpick comments (5)
core/providers/gemini/files.go (2)

466-470: Misleading comment: fields use camelCase, not snake_case.

The comment states "Uses snake_case field names to match Google's API format," but `GeminiFileResponse` uses camelCase JSON tags (e.g., `displayName`, `mimeType`, `sizeBytes`). This is actually correct for Google's Gemini API. Consider updating the comment to reflect the actual casing.

472-485: Consider preserving the actual MIME type if available.

The `MimeType` is hardcoded to `"application/octet-stream"`. While this is a safe fallback, it may not accurately represent the actual file type. If the original upload captured the MIME type, consider storing it in `BifrostFileUploadResponse` and using it here.

transports/bifrost-http/integrations/utils.go (1)

194-194: Redundant Content-Length header setting.

`fasthttp.RequestCtx.SetBody()` automatically sets the `Content-Length` header based on the body size. This explicit header setting is not harmful but is unnecessary.

transports/bifrost-http/integrations/genai.go (2)

712-778: Batch ID extractors correctly normalize IDs but share duplicated logic.

Both `extractGeminiBatchIDFromPath` and `extractGeminiBatchIDFromPathCancel`:

- Default provider from header.
- Enforce `batch_id` presence and non-empty string.
- Normalize IDs to the `batches/<id>` format, trimming `:cancel` where appropriate.

Consider extracting the shared "read + normalize batch_id" steps into a small helper (sketched below) to avoid drift between the two code paths, but the current behavior is functionally sound.

984-1009: Raw body upload handling is fine but doesn't derive MIME type.

The raw upload path copies the entire body into `uploadReq.File`, and optionally extracts `filename` from `Content-Disposition`. It leaves `MimeType` unset, which may be acceptable if downstream code infers MIME type elsewhere; if not, consider populating it from `Content-Type` as a best‑effort hint.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
- core/providers/bedrock/batch.go
- core/providers/ollama/batch.go
- core/providers/cerebras/batch.go
- core/providers/azure/batch.go
- core/schemas/bifrost.go
- core/providers/perplexity/batch.go
- core/providers/anthropic/batch.go
- core/providers/mistral/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/openrouter/batch.go
- core/providers/openai/batch.go
- core/providers/parasail/batch.go
- core/schemas/provider.go
- core/providers/groq/batch.go
- core/bifrost.go
- core/providers/vertex/batch.go
- core/providers/sgl/batch.go
- core/schemas/batch.go
- core/providers/cohere/batch.go
- transports/bifrost-http/integrations/genai.go
- core/providers/gemini/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/gemini/files.go
- core/providers/elevenlabs/batch.go
- core/providers/gemini/types.go
- transports/bifrost-http/integrations/router.go
🧬 Code graph analysis (10)
core/providers/openai/batch.go (4)
- core/providers/openai/openai.go (1): OpenAIProvider (24-30)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/parasail/batch.go (2)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)

core/schemas/provider.go (2)
- core/schemas/bifrost.go (2): BatchDeleteRequest (103-103), BifrostError (364-373)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)

core/providers/groq/batch.go (4)
- core/providers/groq/groq.go (1): GroqProvider (16-21)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/vertex/batch.go (4)
- core/providers/vertex/vertex.go (1): VertexProvider (57-62)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/sgl/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/schemas/batch.go (2)
- core/schemas/provider.go (1): Provider (313-362)
- core/schemas/bifrost.go (2): ModelProvider (32-32), BifrostResponseExtraFields (295-304)

core/providers/cohere/batch.go (4)
- core/providers/cohere/cohere.go (1): CohereProvider (67-73)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/gemini/files.go (1)
- core/schemas/files.go (1): BifrostFileUploadResponse (89-105)

core/providers/elevenlabs/batch.go (2)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (34)
core/providers/elevenlabs/batch.go (1)

35-38: LGTM! The `BatchDelete` stub follows the established pattern for unsupported operations, consistent with the other `Batch*` methods in this file. The method signature correctly uses the new `BifrostBatchDeleteRequest` and `BifrostBatchDeleteResponse` types.

core/providers/vertex/batch.go (1)

35-38: LGTM! The `BatchDelete` stub correctly follows the established pattern for unsupported operations, consistent with the other `Batch*` methods in this file.

core/providers/openrouter/batch.go (1)

35-38: LGTM! The `BatchDelete` stub correctly follows the established pattern for unsupported operations, consistent with the other `Batch*` methods in this file.

core/providers/cohere/batch.go (1)

35-38: LGTM! The `BatchDelete` implementation follows the established pattern for unsupported batch operations in this provider.

core/schemas/provider.go (3)

182-182: LGTM! The `BatchDelete` field addition to `AllowedRequests` is correctly placed and follows the naming conventions.

231-232: LGTM! The `BatchDeleteRequest` case in `IsOperationAllowed` correctly returns the `BatchDelete` field and maintains consistency with the other batch operations.

350-351: LGTM! The `BatchDelete` method addition to the `Provider` interface follows the established pattern for batch operations, with an appropriate signature and documentation.

core/providers/openai/batch.go (1)

597-600: LGTM! The `BatchDelete` implementation correctly returns an unsupported operation error, consistent with the pattern used across other providers.

core/providers/groq/batch.go (1)

35-38: LGTM! The `BatchDelete` implementation follows the consistent pattern for unsupported batch operations across all providers.

core/schemas/batch.go (2)

265-272: LGTM! The `BifrostBatchDeleteRequest` struct follows the established pattern for batch request types, with appropriate fields and JSON tags.

274-281: LGTM! The `BifrostBatchDeleteResponse` struct is well designed, with appropriate fields, including a `Deleted` boolean to indicate success, and follows the standard response pattern.

core/providers/sgl/batch.go (1)

35-38: LGTM! The `BatchDelete` implementation is consistent with the unsupported-operation pattern used across all provider stubs.

core/providers/parasail/batch.go (1)

35-38: LGTM! The `BatchDelete` implementation follows the established pattern for unsupported batch operations.
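Taken together, these stubs all reduce to the same shape. The following is a minimal sketch of that pattern; the receiver, the `schemas.Key` parameter, and the exact arguments of `NewUnsupportedOperationError` are assumptions inferred from the review context, not the PR's literal code:

```go
// Sketch of the shared unsupported-operation stub (argument shapes assumed).
func (provider *ParasailProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
	// No upstream endpoint exists for this operation, so fail fast and uniformly.
	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, "parasail")
}
```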
core/bifrost.go (1)
1133-1209: LGTM! The `BatchDeleteRequest` public API method is well implemented and follows the exact pattern established by the other batch operations (`BatchCancel`, `BatchRetrieve`, `BatchResults`). The implementation includes:
- Proper input validation for nil request, missing provider, and missing batch ID
- Consistent error handling and messaging
- Appropriate base provider type determination for custom providers
- Key selection logic for providers that require authentication
- Retry logic through the executeRequestWithRetries helper
- Error augmentation with request metadata
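A hedged usage sketch of the new public method from a caller's point of view; the request field names (`Provider`, `BatchID`) are inferred from the validation steps listed above, and `client` stands in for an initialized Bifrost instance:

```go
// Sketch: exercising the new public API; field names are assumptions.
resp, bifrostErr := client.BatchDeleteRequest(ctx, &schemas.BifrostBatchDeleteRequest{
	Provider: schemas.Gemini,   // an omitted provider fails validation
	BatchID:  "batches/abc123", // an omitted batch ID fails validation
})
if bifrostErr != nil {
	// The error is augmented with request metadata before being returned.
	log.Printf("batch delete failed: %v", bifrostErr)
	return
}
log.Printf("deleted=%v", resp.Deleted)
```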
transports/bifrost-http/integrations/genai.go (9)

207-261: Resumable phase-2 file upload routing and response enrichment look correct. The POST/PUT resumable routes correctly reuse the same parser, map `GeminiFileUploadRequest` to `BifrostFileUploadRequest`, propagate the provider from context, and use `PostCallback` to set `X-Goog-Upload-Status` and the original filename when missing. This wiring matches the resumable init flow and Bifrost's file API shape.

263-314: PUT variant for resumable uploads is consistent with POST. The PUT route mirrors the POST resumable handler: same parser, provider resolution, and response conversion, differing only in HTTP method. This maintains compatibility with clients that prefer PUT without duplicating logic.

663-684: Batch create pre-callback correctly normalizes provider and model. `extractGeminiBatchCreateParams` sensibly defaults the provider header to `Gemini`, stores it on the Bifrost context, and strips `:batchGenerateContent` from the path's model segment before assigning it to the SDK request. This keeps both provider and model consistent between router and provider.

686-710: Batch list query parsing is straightforward and robust. The pre-callback maps `x-model-provider` into the Bifrost context and parses `pageSize`/`pageToken` from query args into `GeminiBatchListRequestSDK`, ignoring parse failures gracefully. That's reasonable, non-surprising behavior.

936-982: Multipart/form-data upload path looks correct and falls back to the filename when metadata is missing. `parseGeminiMultipartUpload`:
- Reads the `metadata` JSON if present and sets `uploadReq.Filename`.
- Reads the file content into a correctly sized buffer.
- Falls back to `fileHeader.Filename` when metadata is omitted.
This is a reasonable mapping from the SDK's multipart format to `GeminiFileUploadRequest`.

1171-1193: File list query param extraction is consistent with the other extractors. The code:
- Defaults the provider from `x-model-provider`.
- Parses `pageSize` and `pageToken` into `GeminiFileListRequest` (`Limit` and `After`).
This mirrors the batch list flow and seems correct.

1195-1219: Retrieve-file extractor correctly validates `file_id` and patches the SDK request. `extractGeminiFileRetrieveParams`:
- Sets the provider in context from the header (default Gemini).
- Validates that `file_id` exists and is non-empty.
- Copies it into the typed `GeminiFileRetrieveRequest`.
Nothing stands out as problematic here.

1221-1245: Delete-file extractor matches the retrieve-file pattern. The delete extractor:
- Handles the provider exactly like the retrieve path.
- Validates `file_id` presence and non-emptiness.
- Sets `FileID` on `GeminiFileDeleteRequest`.
This is consistent and should interoperate cleanly with the router's `FileDeleteRequest` path.

1247-1255: GenAI router wiring includes the batch routes as expected. `NewGenAIRouter` now appends `CreateGenAIBatchRouteConfigs` to the same `/genai` prefix used by the existing chat and file routes. This cleanly exposes the new batch surface without changing existing endpoints.

core/providers/gemini/batch.go (3)
schemas.BatchStatusto SDK job states is sensible (e.g.,InProgress→RUNNING,Completed→SUCCEEDED,Cancelling→CANCELLING). TreatingExpiredasFAILEDis a choice but matches many APIs that consider expiry terminal/error.
898-933:ToGeminiBatchRetrieveResponseconversion is consistent aside from stats math.The retrieve converter correctly:
- Propagates
ID,Status, timestamps, and operation name.- Sets
Dest.FileNamewhenOutputFileIDis present.Once the BatchStats calculation is fixed as above, this will be a solid SDK mapping.
986-1043: BatchDelete implementation and URL construction look correct.
BatchDelete:
- Checks provider capability via
CheckOperationAllowed.- Validates
BatchID.- Constructs the delete URL for both
batches/<id>and raw<id>forms.- Uses DELETE, attaches API key, and treats 200/204 as success, otherwise defers to
parseGeminiError.- Returns a minimal
BifrostBatchDeleteResponsewith latency metadata.This matches the router’s new delete path and should behave well against the Gemini API.
transports/bifrost-http/integrations/router.go (5)
80-89: BatchRequest extension for delete operations is coherent.Adding
DeleteRequest *schemas.BifrostBatchDeleteRequesttoBatchRequestmatches the expanded batch API surface and integrates cleanly with the existingTypeswitch inhandleBatchRequest.
159-162: NewBatchDeleteResponseConvertertype fits the existing converter pattern.Defining a dedicated converter for
BifrostBatchDeleteResponsekeeps RouteConfig extensible and mirrors the other batch response converters, making integration code uniform.
271-301: RouteConfig additions for batch delete look correct.Including
BatchDeleteResponseConverterinRouteConfigand usingBatchCreateRequestConverteras the generic batch converter is consistent with how batch, file, and inference routes are distinguished inRegisterRoutesandcreateHandler.
440-454: Batch routing detection and dispatch are cleanly integrated.Using
config.BatchCreateRequestConverter != nilto identify batch routes, then delegating tohandleBatchRequestwith a strongly typedBatchRequest, aligns with the existing inference/file split. The earlydefer cancel()ensures contexts are cleaned up for batch routes.
702-853: Batch delete handling mirrors other batch operations.The new
schemas.BatchDeleteRequestbranch:
- Validates
DeleteRequestnon‑nil.- Calls
g.client.BatchDeleteRequest.- Applies
PostCallbackand optionalBatchDeleteResponseConverterbefore sending success.This mirrors create/list/retrieve/cancel/results patterns and should be easy for integration code to reason about.
core/providers/gemini/types.go (3)
1578-1590: SDK job state constants align with documented GenAI SDK states.Defining
GeminiJobState*alongside internal batch states clarifies the distinction between internal/provider statuses and SDK‑visible states and supports the new converters inbatch.go. Names and values look consistent.
1591-1651: New SDK batch types provide a clean surface for router ↔ provider integration.The added SDK‑facing types:
GeminiBatchCreateRequestSDK(with flexibleSrc),- Inline request/config structs,
GeminiBatchJobResponseSDK, list/request/cancel/delete SDK structs,match the payloads expected by the Google GenAI SDK and give the HTTP layer a strongly typed target for JSON unmarshaling. Combined with the converters in
batch.goand the request mappers ingenai.go, this is a solid abstraction boundary.
1655-1662: GeminiFileUploadRequest redesign matches resumable upload needs.Changing
GeminiFileUploadRequestto:
- Keep raw
File []byteandResumableSessionIDinternal (json:"-"),- Expose
filename,mimeType, andpurposeas JSON fields,fits the resumable upload protocol: phase‑1 carries metadata only, phase‑2 carries bytes, and the router/provider can mix these via in‑memory fields without leaking internal details on the wire.
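For reference, the reshaped request reads approximately like this sketch; the tag spellings are taken from the field names above, while the exact ordering and `omitempty` choices in the PR may differ:

```go
// Approximate shape of the redesigned upload request (a sketch, not the PR's code).
type GeminiFileUploadRequest struct {
	File               []byte `json:"-"` // raw bytes, never serialized on the wire
	ResumableSessionID string `json:"-"` // internal phase-1/phase-2 correlation
	Filename           string `json:"filename,omitempty"`
	MimeType           string `json:"mimeType,omitempty"`
	Purpose            string `json:"purpose,omitempty"`
}
```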
23804b1 to cc79e00 (Compare)
53d30ec to 5dffad9 (Compare)
cc79e00 to 72a73aa (Compare)
5dffad9 to db0afec (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (9)
transports/bifrost-http/integrations/genai.go (7)
45-49: Handle the error from `crypto/rand.Read`. While `crypto/rand.Read` rarely fails, ignoring the error could lead to predictable session IDs if the random source is unavailable, which is a security concern for session hijacking.

```diff
 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Fallback - this should rarely happen
+		return fmt.Sprintf("%d", time.Now().UnixNano())
+	}
 	return hex.EncodeToString(bytes)
 }
```
52-67: Provide a mechanism to stop the cleanup goroutine for graceful shutdown. The cleanup goroutine runs indefinitely with no way to stop it, which prevents clean application termination.

```diff
+var cleanupStop = make(chan struct{})
+
 func init() {
 	go func() {
 		ticker := time.NewTicker(10 * time.Minute)
+		defer ticker.Stop()
 		for {
-			for range ticker.C {
+			select {
+			case <-ticker.C:
 				now := time.Now()
 				uploadSessions.Range(func(key, value interface{}) bool {
 					if session, ok := value.(*uploadSession); ok {
 						if now.Sub(session.CreatedAt) > time.Hour {
 							uploadSessions.Delete(key)
 						}
 					}
 					return true
 				})
+			case <-cleanupStop:
+				return
+			}
 		}
 	}()
 }
```
240-254: Remove the debug `fmt.Printf` statements before merging. There are numerous `[DEBUG]` print statements throughout this file that should be removed or replaced with a proper logger with configurable log levels. These debug statements are present at lines 240, 249, 253-254, 792-793, 814, 818, 824, 835, 841, 854, 860, 1013, 1022, 1034, 1046, 1062-1063, 1072-1077, 1108, 1125, 1129, and 1147-1148.
885-931: Critical: binary data corruption in multipart/related parsing. The file content is corrupted because the request body is converted to a string at line 887 (`strings.Split(string(body), delimiter)`), then extracted as a string, and finally converted back to bytes at line 929. Go strings are UTF-8 encoded; binary data containing null bytes or invalid UTF-8 sequences will be lost or replaced. This breaks binary file uploads. Use `bytes.Split()` or `bytes.Index()` to preserve binary data integrity throughout the parsing process.
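A self-contained sketch of the byte-oriented splitting being requested; it keeps every part as `[]byte` with no string round-trip. The real parser must still separate part headers from content and handle the trailing `--` marker:

```go
// splitParts separates a multipart/related body on its boundary while keeping
// the payload as raw bytes end-to-end.
func splitParts(body []byte, boundary string) [][]byte {
	delimiter := []byte("--" + boundary)
	var parts [][]byte
	for _, part := range bytes.Split(body, delimiter) {
		part = bytes.TrimSpace(part) // trims the CRLF framing around each part
		if len(part) == 0 || bytes.Equal(part, []byte("--")) {
			continue // skip the preamble and the closing marker
		}
		parts = append(parts, part)
	}
	return parts
}
```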
1056-1056: The hardcoded purpose may be incorrect for non-batch uploads. The purpose is hardcoded to `"batch"` for all file uploads via the resumable upload API. If this endpoint is used for other purposes (e.g., general file storage for chat context), this default may be incorrect. Consider allowing the purpose to be specified in the metadata during phase 1.
1138-1148: Support reverse proxy headers for resumable upload URL construction. The upload URL uses `ctx.Host()` and `ctx.IsTLS()` directly without considering the `X-Forwarded-Proto` and `X-Forwarded-Host` headers. When Bifrost runs behind a reverse proxy, the URL will reference the internal host/scheme instead of the client-facing address, causing subsequent upload requests to fail.

```diff
 	// Build the upload URL for phase 2
-	// Use the request's host and scheme to build the URL
-	scheme := "http"
-	if ctx.IsTLS() {
-		scheme = "https"
+	scheme := string(ctx.Request.Header.Peek("X-Forwarded-Proto"))
+	if scheme == "" {
+		if ctx.IsTLS() {
+			scheme = "https"
+		} else {
+			scheme = "http"
+		}
 	}
-	host := string(ctx.Host())
+	host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
+	if host == "" {
+		host = string(ctx.Host())
+	}
```
1066-1079: Remove the debug prints from `setResumableUploadFinalStatus`. The debug logging marshaling the full response JSON (lines 1072-1077) should be removed or moved to the structured logger. This is consistent with the feedback to remove all `[DEBUG]` print statements.

core/providers/gemini/batch.go (1)
871-896: [Duplicate] Defensive stats computation is still needed to avoid negative pending counts. This is the same issue flagged in the previous review. The converters `ToGeminiBatchJobResponse` (line 889), `ToGeminiBatchRetrieveResponse` (line 916), and `ToGeminiBatchListResponse` (line 955) all compute `PendingRequestCount` as `Total - Completed`, which can become negative when `Total == 0` but `Completed > 0`. As noted in the earlier review, `BatchRetrieve` populates `RequestCounts.Completed` and `RequestCounts.Failed` but leaves `Total` at zero. This will cause the SDK converters to produce invalid negative pending counts. The recommended fix from the previous review should be applied:

```diff
+	total := resp.RequestCounts.Total
+	if total == 0 {
+		total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+	}
+	pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed
+	if pending < 0 {
+		pending = 0
+	}
+
 	result.Metadata = &GeminiBatchMetadata{
 		Name:       resp.ID,
 		State:      ToGeminiJobState(resp.Status),
 		CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
 		BatchStats: &GeminiBatchStats{
-			RequestCount:           resp.RequestCounts.Total,
-			PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
+			RequestCount:           total,
+			PendingRequestCount:    pending,
+			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
 		},
 	}
```

Apply the same pattern to all three converter functions.

Also applies to: 899-933, 936-972
transports/bifrost-http/integrations/router.go (1)
870-894: [Duplicate] Remove the debug `fmt.Printf` statements before merging. These debug statements are still present from the previous review. As noted before, they should be removed or replaced with structured logging using `g.logger.Debug(...)`. Lines to address:
- Line 870: `fmt.Printf("[DEBUG] router: calling FileUploadRequest...`
- Lines 873-877: error message construction and debug print
- Line 881: `fmt.Printf("[DEBUG] router: FileUploadRequest success...`
- Lines 883-885: `PostCallback` debug prints
- Lines 889, 894: `FileUploadResponseConverter` debug prints
- Line 1012: `fmt.Printf("[DEBUG] router: file response conversion error...`
If debug logging is needed for production, use the structured logger instead of raw stdout.

Also applies to: 1012-1012
🧹 Nitpick comments (4)
core/providers/gemini/files.go (2)
466-470: Misleading comment: the JSON tags use camelCase, not snake_case. The comment states "Uses snake_case field names", but `GeminiFileResponse` uses camelCase JSON tags (`displayName`, `mimeType`, `sizeBytes`, etc.). Consider updating the comment to accurately reflect the format.

```diff
-// Uses snake_case field names to match Google's API format.
+// Uses camelCase field names to match Google's API format.
```
472-485: The hardcoded `MimeType` may not reflect the actual file type. `MimeType` is hardcoded to `"application/octet-stream"`. If the original upload request or response contains the actual MIME type, consider propagating it instead of using a generic fallback.

transports/bifrost-http/integrations/genai.go (1)
503-531: Consider renaming the converter field for clarity. The `BatchCreateRequestConverter` field is being used for `BatchListRequest` (line 511), `BatchRetrieveRequest` (line 553), `BatchCancelRequest` (line 591), and `BatchDeleteRequest` (line 629). While functionally correct, since it returns the appropriate `BatchRequest.Type`, a more generic field name like `BatchRequestConverter` would improve readability and reduce confusion.

transports/bifrost-http/integrations/router.go (1)
92-99: Note: the `FileRequest.DeleteRequest` field appears unused. The `DeleteRequest` field was added to `FileRequest` (line 97), but it's not referenced in `handleFileRequest` (lines 858-1023). The existing `FileDeleteRequest` case (lines 952-972) already handles file deletion using `fileReq.DeleteRequest`. If this field is intended for future use, consider adding a comment; otherwise, it may be unnecessary at this time.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (11)
- core/providers/vertex/batch.go
- core/schemas/bifrost.go
- core/providers/bedrock/batch.go
- core/providers/cohere/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/openai/batch.go
- core/providers/ollama/batch.go
- core/providers/anthropic/batch.go
- core/bifrost.go
- core/schemas/provider.go
- core/providers/parasail/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/azure/batch.go
- core/providers/mistral/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/sgl/batch.go
- core/providers/cerebras/batch.go
- core/providers/openrouter/batch.go
- core/providers/gemini/files.go
- core/providers/groq/batch.go
- core/providers/perplexity/batch.go
- transports/bifrost-http/integrations/router.go
- transports/bifrost-http/integrations/genai.go
- core/providers/gemini/batch.go
- core/providers/gemini/types.go
- core/schemas/batch.go
🧬 Code graph analysis (9)
core/providers/mistral/batch.go (2)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

core/providers/elevenlabs/batch.go (3)
- core/providers/elevenlabs/elevenlabs.go (1): `ElevenlabsProvider` (23-29)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

core/providers/sgl/batch.go (4)
- core/providers/sgl/sgl.go (1): `SGLProvider` (18-23)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/schemas/bifrost.go (2): `BifrostError` (364-373), `BatchDeleteRequest` (103-103)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

core/providers/cerebras/batch.go (4)
- core/providers/cerebras/cerebras.go (1): `CerebrasProvider` (16-21)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/schemas/bifrost.go (2): `BifrostError` (364-373), `BatchDeleteRequest` (103-103)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

core/providers/openrouter/batch.go (3)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/schemas/bifrost.go (2): `BifrostError` (364-373), `BatchDeleteRequest` (103-103)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

core/providers/groq/batch.go (3)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/schemas/bifrost.go (2): `BifrostError` (364-373), `BatchDeleteRequest` (103-103)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

core/providers/perplexity/batch.go (4)
- core/providers/perplexity/perplexity.go (1): `PerplexityProvider` (19-24)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/schemas/bifrost.go (2): `BifrostError` (364-373), `BatchDeleteRequest` (103-103)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

transports/bifrost-http/integrations/router.go (4)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- transports/bifrost-http/integrations/genai.go (1): `ErrResumableUploadInit` (36-36)
- core/schemas/bifrost.go (2): `BatchDeleteRequest` (103-103), `FileUploadRequest` (104-104)
- core/schemas/provider.go (1): `Provider` (313-362)

core/providers/gemini/batch.go (3)
- core/schemas/batch.go (3): `BatchStatus` (5-5), `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/providers/gemini/types.go (5): `GeminiBatchJobResponseSDK` (1618-1624), `GeminiBatchMetadata` (1499-1512), `GeminiBatchStats` (1433-1437), `GeminiBatchDest` (1533-1536), `GeminiBatchListResponseSDK` (1627-1630)
- core/providers/utils/utils.go (4): `CheckOperationAllowed` (408-420), `NewBifrostOperationError` (449-460), `SetExtraHeaders` (178-208), `MakeRequestWithContext` (39-93)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (17)
core/providers/azure/batch.go (1)
437-441: LGTM! The `BatchDelete` method correctly returns an unsupported operation error, consistent with the pattern used by other providers that don't support batch deletion.

core/schemas/batch.go (1)

265-282: LGTM! The new `BifrostBatchDeleteRequest` and `BifrostBatchDeleteResponse` types are well structured and consistent with the existing batch request/response patterns in the codebase.
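A plausible shape for these two types, assembled from details scattered across the review rounds (a `Deleted` boolean, `ExtraParams` on the request, `ExtraFields` on the response). The JSON tags and field order are assumptions, so treat this as a sketch rather than the schema itself:

```go
// Hypothetical reconstruction of the new schema types; see core/schemas/batch.go
// for the authoritative definitions.
type BifrostBatchDeleteRequest struct {
	Provider    ModelProvider          `json:"provider"`
	BatchID     string                 `json:"batch_id"`
	ExtraParams map[string]interface{} `json:"extra_params,omitempty"`
}

type BifrostBatchDeleteResponse struct {
	ID          string                     `json:"id"`
	Deleted     bool                       `json:"deleted"`
	ExtraFields BifrostResponseExtraFields `json:"extra_fields,omitempty"`
}
```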
core/providers/openrouter/batch.go (1)

35-39: LGTM! The `BatchDelete` method follows the same pattern as the other batch operations in this provider, correctly returning an unsupported operation error.

core/providers/mistral/batch.go (1)

35-39: LGTM! The `BatchDelete` method is consistent with the existing batch operation stubs in this provider.

core/providers/cerebras/batch.go (1)

35-39: LGTM! The `BatchDelete` method is consistent with the existing batch operation stubs in this provider.

core/providers/sgl/batch.go (1)

35-38: LGTM! The `BatchDelete` implementation correctly follows the established pattern for unsupported batch operations, consistent with the other `Batch*` methods in this file and across other providers.

core/providers/perplexity/batch.go (1)

35-38: LGTM! Consistent implementation of the unsupported `BatchDelete` operation, matching the pattern used by other providers.

core/providers/elevenlabs/batch.go (1)

35-38: LGTM! The `BatchDelete` implementation follows the established unsupported-operation pattern consistently.

core/providers/groq/batch.go (1)

35-38: LGTM! Consistent `BatchDelete` stub matching the established pattern across providers.

transports/bifrost-http/integrations/genai.go (1)

663-778: LGTM: batch parameter extraction functions. The extraction functions follow a consistent pattern: extract the provider from the header (defaulting to Gemini), validate the required path parameters, and properly format batch IDs with the `batches/` prefix. Good input validation with clear error messages.

core/providers/gemini/batch.go (1)
986-1043: LGTM! The `BatchDelete` implementation is solid. The implementation correctly:
- Validates operation permissions and required parameters.
- Handles both batch ID formats (`batches/xxx` and bare IDs).
- Uses the HTTP DELETE method appropriately.
- Accepts both 200 OK and 204 No Content as success responses.
- Returns a properly structured `BifrostBatchDeleteResponse`.
transports/bifrost-http/integrations/router.go (3)
159-161: LGTM! `BatchDeleteResponseConverter` follows the established patterns. The new converter type and `RouteConfig` field are consistent with the other batch operation converters (Create, List, Retrieve, Cancel, Results).
Also applies to: 291-291
424-428: LGTM! Clean handling for resumable upload initialization. The special-case handling for `ErrResumableUploadInit` allows the `PreCallback` to handle resumable upload session initialization and write its own response, then return gracefully without error propagation. This is an appropriate pattern for this use case.
822-842: LGTM! Batch delete handling is consistent with the existing patterns. The `BatchDeleteRequest` case follows the same structure as the other batch operations:
- Validates the request
- Invokes the client method
- Executes optional PostCallback
- Applies optional response converter
core/providers/gemini/types.go (3)
1578-1589: LGTM! The SDK job state constants are well defined. The new `GeminiJobState*` constants are clearly documented as Google GenAI SDK-specific states and are properly distinguished from the internal `GeminiBatchState*` constants. The naming and values align with the SDK format.
1591-1651: LGTM! The SDK batch types are correctly structured. The new SDK-compatible batch types properly mirror the Google GenAI SDK format:
- `GeminiBatchCreateRequestSDK.Src` as `interface{}` appropriately supports both string (file reference) and array (inline requests) inputs.
- Request/response types include all necessary fields with correct JSON tags.
- The types enable SDK-compatible batch operations while maintaining type safety.
Note: the past review comment about `Src` field validation applies to the transport layer (transports/bifrost-http/integrations/genai.go), not to these type definitions.
1656-1662: LGTM! The `GeminiFileUploadRequest` refactor supports resumable uploads. The changes appropriately:
- Add a `MimeType` field for explicit content type specification (replacing the implicit provider-based handling).
- Add `ResumableSessionID` for resumable upload session tracking.
- Remove the `Provider` field, reducing the dependency on the schemas package.
These modifications align with the PR's goal of supporting resumable uploads and refactoring file handling.
Actionable comments posted: 1
♻️ Duplicate comments (11)
transports/bifrost-http/integrations/utils.go (1)
194-196: Avoid logging the full response body in `sendSuccess`. This debug `fmt.Printf` logs the entire JSON payload, which can leak PII/secrets and is costly on hot paths. The explicit `Content-Length` header is fine; the body log is not. Apply this diff to drop the unsafe log while keeping the header:

```diff
-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
-	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
+	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
+	ctx.SetBody(responseBody)
```

If you still want debug observability, prefer a structured log that only includes the status and `len(responseBody)`, without the body contents.

transports/bifrost-http/integrations/router.go (1)
[DEBUG]fmt.Printfcalls in the file-upload path (FileUploadRequest, post-callback, response conversion, and final error logging). These bypass the structuredg.loggerand will spam stdout in production. Prior review already requested their removal or replacement.Recommend deleting them or converting to
g.logger.Debug(...)with structured fields; keep user-visible behavior unchanged.Also applies to: 1011-1013
core/providers/gemini/batch.go (1)
870-896: Fix the BatchStats math to avoid negative pending/success counts. All three SDK converters currently derive stats as:
- `RequestCount: resp.RequestCounts.Total`
- `PendingRequestCount: resp.RequestCounts.Total - resp.RequestCounts.Completed`
- `SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed`

For Gemini batches where `Total` is left at 0 but `Completed`/`Failed` are non-zero (e.g., `BatchRetrieve` only sets those fields), this produces negative pending/successful counts, which is invalid. Derive totals defensively and clamp pending to ≥ 0, e.g.:

```diff
+func deriveStats(counts schemas.BatchRequestCounts) (total, pending, success int) {
+	total = counts.Total
+	if total == 0 {
+		total = counts.Completed + counts.Failed
+	}
+	success = counts.Completed
+	if success < 0 {
+		success = 0
+	}
+	pending = total - (success + counts.Failed)
+	if pending < 0 {
+		pending = 0
+	}
+	return
+}
```

and then in each converter:

```diff
-	BatchStats: &GeminiBatchStats{
-		RequestCount:           resp.RequestCounts.Total,
-		PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-		SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-	},
+	total, pending, success := deriveStats(resp.RequestCounts)
+	BatchStats: &GeminiBatchStats{
+		RequestCount:           total,
+		PendingRequestCount:    pending,
+		SuccessfulRequestCount: success,
+	},
```

Apply consistently in `ToGeminiBatchJobResponse`, `ToGeminiBatchRetrieveResponse`, and `ToGeminiBatchListResponse`.

Also applies to: 898-933, 935-972
23-67: Session store and cleanup goroutine need shutdown hookThe
uploadSessionsmap plus ticker-based cleanup is fine, but theinit()goroutine runs forever with no way to stop it, which complicates graceful shutdown and tests.Consider adding a package-level stop channel or context so the cleanup loop can exit cleanly on server shutdown (as previously suggested), and ensure
ticker.Stop()is called.
44-49: Handle the error from `crypto/rand.Read` when generating session IDs. `generateSessionID` ignores the error from `rand.Read`, which can (in rare cases) yield predictable or zeroed IDs if the random source fails. Handle the error and either:
- return a `(string, error)` and propagate it, or
- fall back to a clearly best-effort ID (e.g., time-based) with a comment explaining why.
808-865: Resumable upload phase-1 logic is fine but debug prints should be removedThe phase-1 handler:
- Parses JSON metadata, stores filename/MIME size in
uploadSession.- Generates a session ID and stores it on
uploadReqfor the PreCallback to consume.- Falls back gracefully if metadata unmarshal fails.
However, multiple
fmt.Printfcalls log content type, body, parsed metadata, session IDs, and raw content lengths. These should be removed or replaced with the structured logger at an appropriate level before production.
867-934: Fix binary corruption inparseGeminiMultipartRelatedThis function converts the raw body to a string and splits on the boundary:
parts := strings.Split(string(body), delimiter)- Later, file content is reconstituted with
[]byte(content).This corrupts arbitrary binary data (null bytes, non-UTF-8 sequences). Replace all string-based splitting with byte-based operations (
bytes.Split,bytes.Index,bytes.TrimSpace) and keep the file content as[]byteslices end-to-end, only converting JSON metadata segments to bytes/strings as needed forsonic.Unmarshal.This is a correctness blocker for binary uploads.
1011-1064: Resumable phase 2: avoid debug prints and reconsider the hardcoded purpose. The phase-2 handler correctly:
- Validates `session_id`.
- Loads the session and copies the body into `uploadReq.File`.
- Restores the filename/MIME from the session.
- Associates the session ID for later `PreCallback` use.
Issues:
- Multiple `fmt.Printf` debug logs should be removed or switched to the logger.
- `uploadReq.Purpose = "batch"` is hardcoded and may be wrong for non-batch uploads going through this path. Consider deriving the purpose from the metadata/session or allowing a default that matches the actual API usage.
X-Goog-Upload-Status: finalis correct, but marshalling the entirerespand printing it plus status code on every call is noisy and potentially expensive.Remove these
fmt.Printfcalls (and thesonic.Marshal) or replace them with concise structured logging if needed at debug level.
1081-1112: The resumable phase-2 `PreCallback` is correct; drop the debug print. `extractGeminiResumableUploadParams`:
- Validates `session_id`.
- Loads and deletes the session.
- Sets `bifrostContextKeyProvider` and stores the original filename into context.
This is the right place to restore the provider/filename. Just remove the `fmt.Printf` debug line and rely on the logger if you still need diagnostics.
- Sets provider on
bifrostCtx.- For metadata-only requests with a resumable session, updates the session with provider, constructs a phase-2 URL using
ctx.IsTLS()andctx.Host(), and returnsErrResumableUploadInitafter writing the response.Issues:
- URL construction ignores
X-Forwarded-Proto/X-Forwarded-Host. Behind a reverse proxy, clients will receive an internal host/scheme and subsequent uploads may fail. Mirror the pattern used elsewhere (e.g., insession.go) to prefer forwarded headers and fall back toctx.IsTLS()/ctx.Host().- Several
fmt.Printfdebug statements should be removed or converted to structured logger calls.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (13)
- core/providers/cerebras/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/vertex/batch.go
- core/schemas/provider.go
- core/providers/gemini/files.go
- core/providers/ollama/batch.go
- core/providers/perplexity/batch.go
- core/providers/anthropic/batch.go
- core/providers/cohere/batch.go
- core/bifrost.go
- core/providers/groq/batch.go
- core/providers/openrouter/batch.go
- core/providers/parasail/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/schemas/bifrost.go
- core/providers/bedrock/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/openai/batch.go
- core/providers/azure/batch.go
- core/providers/sgl/batch.go
- core/schemas/batch.go
- transports/bifrost-http/integrations/genai.go
- core/providers/gemini/batch.go
- core/providers/mistral/batch.go
- transports/bifrost-http/integrations/router.go
- core/providers/gemini/types.go
🧬 Code graph analysis (7)
core/providers/openai/batch.go (4)
- core/providers/openai/openai.go (1): `OpenAIProvider` (24-30)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/schemas/bifrost.go (2): `BifrostError` (364-373), `BatchDeleteRequest` (103-103)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

core/providers/azure/batch.go (3)
- core/providers/azure/azure.go (1): `AzureProvider` (22-27)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/schemas/bifrost.go (2): `BifrostError` (364-373), `BatchDeleteRequest` (103-103)

core/providers/sgl/batch.go (4)
- core/providers/sgl/sgl.go (1): `SGLProvider` (18-23)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/schemas/bifrost.go (2): `BifrostError` (364-373), `BatchDeleteRequest` (103-103)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

core/schemas/batch.go (2)
- core/schemas/provider.go (1): `Provider` (313-362)
- core/schemas/bifrost.go (2): `ModelProvider` (32-32), `BifrostResponseExtraFields` (295-304)

core/providers/gemini/batch.go (4)
- core/schemas/batch.go (13): `BatchStatus` (5-5), `BatchStatusValidating` (8-8), `BatchStatusInProgress` (10-10), `BatchStatusFinalizing` (11-11), `BatchStatusCompleted` (12-12), `BatchStatusFailed` (9-9), `BatchStatusCancelling` (14-14), `BatchStatusCancelled` (15-15), `BatchStatusExpired` (13-13), `BifrostBatchRetrieveResponse` (152-187), `BifrostBatchListResponse` (129-140), `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/providers/gemini/types.go (11): `GeminiJobStatePending` (1582-1582), `GeminiJobStateRunning` (1583-1583), `GeminiJobStateSucceeded` (1584-1584), `GeminiJobStateFailed` (1585-1585), `GeminiJobStateCancelling` (1586-1586), `GeminiJobStateCancelled` (1587-1587), `GeminiBatchJobResponseSDK` (1618-1624), `GeminiBatchMetadata` (1499-1512), `GeminiBatchStats` (1433-1437), `GeminiBatchDest` (1533-1536), `GeminiBatchListResponseSDK` (1627-1630)
- core/schemas/bifrost.go (5): `BifrostError` (364-373), `Gemini` (48-48), `BatchDeleteRequest` (103-103), `BifrostResponseExtraFields` (295-304), `RequestType` (83-83)
- core/providers/utils/utils.go (4): `CheckOperationAllowed` (408-420), `NewBifrostOperationError` (449-460), `SetExtraHeaders` (178-208), `MakeRequestWithContext` (39-93)

core/providers/mistral/batch.go (2)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- core/providers/utils/utils.go (1): `NewUnsupportedOperationError` (391-402)

transports/bifrost-http/integrations/router.go (4)
- core/schemas/batch.go (2): `BifrostBatchDeleteRequest` (266-272), `BifrostBatchDeleteResponse` (275-281)
- transports/bifrost-http/integrations/genai.go (1): `ErrResumableUploadInit` (36-36)
- core/schemas/bifrost.go (1): `BatchDeleteRequest` (103-103)
- core/schemas/provider.go (1): `Provider` (313-362)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (25)

core/schemas/bifrost.go (1)

85-105: The `BatchDeleteRequest` `RequestType` addition looks consistent. The new `BatchDeleteRequest` enum value fits the existing batch request naming/ordering and aligns with the new batch delete schemas/providers.

core/providers/bedrock/batch.go (1)

931-935: Bedrock's `BatchDelete` stub matches the unsupported-operation pattern. This no-op implementation cleanly advertises that Bedrock doesn't support batch delete, consistent with other unsupported operations.

core/providers/azure/batch.go (1)

437-441: Azure's `BatchDelete` unsupported stub is wired correctly. The signature and error behavior align with the other Azure batch methods and the shared `NewUnsupportedOperationError` helper; a good placeholder until Azure exposes delete.

core/providers/sgl/batch.go (1)

35-38: The SGL `BatchDelete` stub is consistent with the other SGL batch methods. It extends the existing "not supported" pattern to the new BatchDelete operation; no additional changes needed.

core/providers/openai/batch.go (1)

597-600: The OpenAI `BatchDelete` stub correctly advertises unsupported behavior. It implements the new interface method and cleanly returns a shared unsupported-operation error; this matches the patterns used across providers.

core/providers/mistral/batch.go (1)

35-38: The Mistral `BatchDelete` stub aligns with the existing unsupported batch APIs. Consistent use of `NewUnsupportedOperationError` for the new request type; the interface is satisfied without implying batch delete support.

core/schemas/batch.go (1)

265-281: The batch delete request/response schemas look well shaped. `BifrostBatchDeleteRequest`/`Response` follow the existing batch/file delete conventions (ID + `deleted` + `ExtraFields`) and give providers room via `ExtraParams`.

transports/bifrost-http/integrations/router.go (3)

80-89: `BatchRequest`: the `DeleteRequest` wiring looks consistent. Adding `DeleteRequest *schemas.BifrostBatchDeleteRequest` is consistent with the other batch fields and enables the new delete path without changing existing behavior.

159-162: The `BatchDeleteResponseConverter` integration is coherent. The new `BatchDeleteResponseConverter` type and its inclusion in `RouteConfig` follow the existing pattern for the other batch converters and keep the public surface symmetrical. No functional issues spotted.

Also applies to: 271-301

822-843: The `BatchDelete` branch mirrors the other batch operations correctly. The `schemas.BatchDeleteRequest` case validates input, calls `g.client.BatchDeleteRequest`, runs `PostCallback`, and runs/omits `BatchDeleteResponseConverter` exactly in line with the other batch cases. This is structurally sound.

core/providers/gemini/batch.go (2)

846-868: The status mapping to SDK job states looks correct. `ToGeminiJobState` covers all known `schemas.BatchStatus` values and maps them to the appropriate `GeminiJobState*` constants, with a sensible default to `JOB_STATE_PENDING`.

986-1043: The Gemini `BatchDelete` implementation is aligned with the other operations. `BatchDelete` performs standard gating (`CheckOperationAllowed`), validates `BatchID`, builds the correct DELETE URL (with/without the `batches/` prefix), sends the request, and treats 200/204 as success, returning a populated `BifrostBatchDeleteResponse` with latency in `ExtraFields`. This matches other provider operations and looks functionally solid.

transports/bifrost-http/integrations/genai.go (10)

165-205: The file route converters for Gemini look structurally correct. The new file endpoints (upload, resumable POST/PUT, list, retrieve, delete) correctly:
- Create SDK request types via `GetRequestTypeInstance`.
- Convert to `FileRequest` with the provider from `bifrostContextKeyProvider`.
- Use the appropriate Bifrost request/response structs and converters.
This wiring matches existing router patterns and looks functionally sound.

Also applies to: 207-314

438-661: The batch route configs are coherent and align with the new core `Batch*` APIs. `CreateGenAIBatchRouteConfigs` wires SDK batch requests/responses to the Bifrost `BatchCreate/List/Retrieve/Cancel/Delete` operations consistently:
- The provider is extracted via `bifrostContextKeyProvider`.
- Path patterns and methods match the expected GenAI endpoints.
- Response converters support raw passthrough and SDK formatting.
The use of `BatchCreateRequestConverter` for all batch types is slightly misnamed but consistent with the existing router contract.

664-710: The batch create `PreCallback` correctly sets provider and model. `extractGeminiBatchCreateParams`:
- Derives the provider from `x-model-provider` (defaulting to Gemini) and sets it on `bifrostCtx`.
- Extracts `model` from the URL and removes the `:batchGenerateContent` suffix.
This keeps the Bifrost and SDK request fields aligned.

780-806: File upload entrypoint routing for resumable and multipart is sensible. `parseGeminiFileUploadRequest` distinguishes:
- Resumable (`X-Goog-Upload-Protocol`) → `parseGeminiResumableUpload`.
- `multipart/*` → `parseGeminiMultipartUpload`.
- Fallback raw body → `parseGeminiRawUpload`.
This branching is clear and supports the GenAI SDK patterns.

936-982: Multipart/form-data handling looks correct and preserves filenames. `parseGeminiMultipartUpload`:
- Reads the `metadata` JSON if present to set `uploadReq.Filename`.
- Extracts the required `file` part, reads the full content, and uses the header filename as a fallback when metadata is absent.
This is straightforward and seems robust.

984-1009: Raw body upload parsing is straightforward and safe. `parseGeminiRawUpload` copies the full body into `uploadReq.File` and attempts to parse a `filename` from `Content-Disposition`. This is appropriate for non-multipart uploads.

1171-1192: The file list `PreCallback`/provider extraction looks good. `extractGeminiFileListQueryParams`:
- Sets the provider from `x-model-provider` with a Gemini default.
- Maps the `pageSize`/`pageToken` query params to `GeminiFileListRequest.Limit` and `After`.
This matches the GenAI API semantics and is consistent with the other extractors.

1195-1219: The file retrieve `PreCallback` wiring is correct. `extractGeminiFileRetrieveParams`:
- Sets the provider from the header with a Gemini default.
- Validates `file_id` and assigns it into the SDK request struct.
No issues found.

1221-1244: The file delete `PreCallback` is consistent with retrieve. `extractGeminiFileDeleteParams` mirrors the retrieve variant for `file_id` and provider extraction and sets `GeminiFileDeleteRequest.FileID`. Looks correct.

1248-1255: `NewGenAIRouter`: the inclusion of batch routes is correct. `NewGenAIRouter` now appends both the file and batch route configs to the base GenAI routes. This is the expected integration point for the new batch features.

core/providers/gemini/types.go (3)

1578-1590: The SDK job-state constants align with Google GenAI semantics. The `GeminiJobState*` constants mirror the documented SDK job states and are suitable targets for `ToGeminiJobState`. This separation from the internal batch states is clean.

1591-1651: The SDK batch request/response types look compatible with the GenAI SDK. The new SDK structs (`GeminiBatchCreateRequestSDK`, inline request/config, job/list/list-request/retrieve/cancel/delete) reflect the expected shapes:
- `Src` polymorphism for file vs. inline input.
- `Name` fields for resource IDs.
- `BatchJobs`/`NextPageToken` for list responses.
They match how the GenAI router uses them and should serialize/deserialize correctly.

1655-1662: The `GeminiFileUploadRequest` shape matches the new transport and provider usage. Switching `GeminiFileUploadRequest` to:
- `File []byte` (non-JSON),
- `Filename`, `MimeType`, and `Purpose`,
- `ResumableSessionID` (non-JSON, internal),
is consistent with how the genai.go parsers populate uploads and how the Gemini provider expects to send files upstream. No issues spotted with this struct definition.
```go
if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
	// Check if this is a resumable upload init that was already handled
	if err == ErrResumableUploadInit {
		// Response was already written by the PreCallback, just return
		return
	}
	g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
```
Call `cancel()` before early-returning on `ErrResumableUploadInit`.

When `PreCallback` returns `ErrResumableUploadInit`, the handler returns without ever calling `cancel()`, leaving the cancellable context from `ConvertToBifrostContext` un-cancelled on this path. Even though no upstream call is made, it's safer and more consistent to cancel here.

Consider:

```diff
 if config.PreCallback != nil {
 	if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
 		// Check if this is a resumable upload init that was already handled
 		if err == ErrResumableUploadInit {
-			// Response was already written by the PreCallback, just return
-			return
+			// Response was already written by the PreCallback; clean up context and return
+			cancel()
+			return
 		}
 		g.sendError(ctx, bifrostCtx, config.ErrorConverter,
 			newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
 		return
 	}
 }
```

🤖 Prompt for AI Agents

In transports/bifrost-http/integrations/router.go around lines 423 to 429, the handler returns early when PreCallback yields ErrResumableUploadInit but fails to call the cancel function created by ConvertToBifrostContext; update the error branch so that you invoke cancel() before returning on ErrResumableUploadInit to ensure the cancellable context is properly cleaned up, leaving the rest of the error handling path unchanged.
db0afec to 4d7148c (Compare)
72a73aa to a6632ea (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (10)
transports/bifrost-http/integrations/router.go (2)
423-428: Ensure cancellable context is cleaned up on ErrResumableUploadInitWhen
PreCallbackreturnsErrResumableUploadInit, the handler returns without invokingcancel(), unlike other paths thatdefer cancel()before handing off to batch/file/inference handlers. While this path doesn’t call into Bifrost, it’s more consistent (and future-proof against PreCallbacks that might spawn work) to callcancel()before returning here.
870-895: Remove remaining fmt.Printf debug logging from file handlersThe
fmt.Printfdebug statements in the file upload path and file response error handling are still present and log directly to stdout, bypassing the structured logger (g.logger). These should either be removed or converted tog.logger.Debug(...)calls with appropriate context before this stack is merged.Also applies to: 1012-1013
core/providers/gemini/batch.go (1)
870-896: Still vulnerable to negative pending counts in BatchStats.The past review comment flagging negative
PendingRequestCountwhenresp.RequestCounts.Total == 0remains unaddressed. Lines 889-890, 915-917, and 954-956 all performTotal - Completedwithout defensive checks, which produces invalid negative counts when Bifrost populates onlyCompletedandFailedbut leavesTotalat zero.Apply the defensive math suggested in the previous review:
+ total := resp.RequestCounts.Total + if total == 0 { + total = resp.RequestCounts.Completed + resp.RequestCounts.Failed + } + pending := total - resp.RequestCounts.Completed + if pending < 0 { + pending = 0 + } + success := resp.RequestCounts.Completed - resp.RequestCounts.Failed + if success < 0 { + success = 0 + } + result.Metadata = &GeminiBatchMetadata{ Name: resp.ID, State: ToGeminiJobState(resp.Status), CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339), BatchStats: &GeminiBatchStats{ - RequestCount: resp.RequestCounts.Total, - PendingRequestCount: resp.RequestCounts.Total - resp.RequestCounts.Completed, - SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed, + RequestCount: total, + PendingRequestCount: pending, + SuccessfulRequestCount: success, }, }Mirror the same pattern in
ToGeminiBatchListResponse(lines 954-956).Also applies to: 898-933, 935-972
transports/bifrost-http/integrations/genai.go (7)
45-49: Unchecked error from crypto/rand.Read remains a security risk.The past review comment remains valid: ignoring the error from
crypto/rand.Readat line 47 could lead to predictable session IDs if the random source fails, creating a session hijacking vulnerability.Check the error and either return it to the caller or use a fallback:
func generateSessionID() string { bytes := make([]byte, 16) - rand.Read(bytes) + if _, err := rand.Read(bytes); err != nil { + return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().Unix()) + } return hex.EncodeToString(bytes) }
52-67: Cleanup goroutine still lacks graceful shutdown.The past review comment remains unaddressed: the cleanup goroutine spawned in
init()runs indefinitely with no way to stop it, preventing clean application termination.Add a stop channel:
+var cleanupStop = make(chan struct{}) + func init() { go func() { ticker := time.NewTicker(10 * time.Minute) + defer ticker.Stop() - for range ticker.C { + for { + select { + case <-ticker.C: now := time.Now() uploadSessions.Range(func(key, value interface{}) bool { if session, ok := value.(*uploadSession); ok { if now.Sub(session.CreatedAt) > time.Hour { uploadSessions.Delete(key) } } return true }) + case <-cleanupStop: + return + } } }() }
438-502: Src field still silently ignores invalid types.The past review comment remains valid: the type switch at lines 464-482 silently ignores
Srcvalues that are neitherstringnor[]interface{}, leaving bothInputFileIDandRequestsempty. The downstream validation inBatchCreatethen returns a generic error that doesn't indicate the root cause.Add explicit validation:
// Handle src field - can be string (file reference) or array (inline requests) switch src := sdkReq.Src.(type) { case string: // File-based input: src="files/display_name" bifrostReq.InputFileID = strings.TrimPrefix(src, "files/") case []interface{}: // Inline requests: src=[{contents: [...], config: {...}}] requests := make([]schemas.BatchRequestItem, 0, len(src)) for i, item := range src { if itemMap, ok := item.(map[string]interface{}); ok { customID := fmt.Sprintf("request-%d", i) requests = append(requests, schemas.BatchRequestItem{ CustomID: customID, Body: itemMap, }) + } else { + return nil, fmt.Errorf("src array item %d must be an object, got %T", i, item) } } bifrostReq.Requests = requests + default: + return nil, errors.New("src must be a string (file reference) or array of request objects") }
867-934: Critical: Binary data corruption still present in multipart/related parsing.
The past review comment remains unaddressed: converting the request body to a string at line 887 (`strings.Split(string(body), delimiter)`) and then back to bytes at line 929 (`[]byte(content)`) corrupts any binary data containing null bytes or invalid UTF-8 sequences. This breaks all binary file uploads through the multipart/related endpoint. Use `bytes.Split()` or `bytes.Index()` to preserve binary integrity:
- // Split body by boundary
delimiter := "--" + boundary
- parts := strings.Split(string(body), delimiter)
+ delimiterBytes := []byte(delimiter)
+ // Use bytes operations to preserve binary data
+ var parts [][]byte
+ remaining := body
+ for {
+ idx := bytes.Index(remaining, delimiterBytes)
+ if idx == -1 {
+ if len(remaining) > 0 {
+ parts = append(parts, remaining)
+ }
+ break
+ }
+ parts = append(parts, remaining[:idx])
+ remaining = remaining[idx+len(delimiterBytes):]
+ }
for _, part := range parts {
- part = strings.TrimSpace(part)
- if part == "" || part == "--" {
+ part = bytes.TrimSpace(part)
+ if len(part) == 0 || bytes.Equal(part, []byte("--")) {
continue
}
// Split headers from content
- headerEnd := strings.Index(part, "\r\n\r\n")
+ headerEnd := bytes.Index(part, []byte("\r\n\r\n"))
if headerEnd == -1 {
- headerEnd = strings.Index(part, "\n\n")
+ headerEnd = bytes.Index(part, []byte("\n\n"))
if headerEnd == -1 {
continue
}
}
- headers := part[:headerEnd]
- content := part[headerEnd:]
- content = strings.TrimPrefix(content, "\r\n\r\n")
- content = strings.TrimPrefix(content, "\n\n")
+ headers := string(part[:headerEnd])
+ content := part[headerEnd:]
+ content = bytes.TrimPrefix(content, []byte("\r\n\r\n"))
+ content = bytes.TrimPrefix(content, []byte("\n\n"))
// Check content type of this part
- headersLower := strings.ToLower(headers)
+ headersLower := strings.ToLower(string(headers))
if strings.Contains(headersLower, "application/json") {
// This is metadata
var metadata struct {
File struct {
DisplayName string `json:"display_name"`
MimeType string `json:"mime_type"`
} `json:"file"`
}
- if err := sonic.Unmarshal([]byte(content), &metadata); err == nil {
+ if err := sonic.Unmarshal(content, &metadata); err == nil {
if metadata.File.DisplayName != "" {
uploadReq.Filename = metadata.File.DisplayName
}
if metadata.File.MimeType != "" {
uploadReq.MimeType = metadata.File.MimeType
}
}
} else {
// This is file content - keep as binary
- uploadReq.File = []byte(content)
+ uploadReq.File = content
}
}
240-254: Remove debug fmt.Printf statements before merge.
The past review comment remains valid: numerous `[DEBUG]` print statements are scattered throughout this file at the annotated line ranges. These should be removed or replaced with proper structured logging before merging to production. If debug logging is needed, use the existing logger with configurable log levels instead of fmt.Printf.
Also applies to: 792-793, 813-814, 818-819, 824-824, 835-836, 841-841, 854-855, 860-861, 1001-1001, 1013-1013, 1022-1022, 1034-1034, 1046-1046, 1062-1063, 1072-1077, 1108-1108, 1125-1125, 1129-1129, 1147-1148
1056-1056: Hardcoded purpose may be incorrect for non-batch uploads.
The past review comment remains valid: the purpose is hardcoded to `"batch"` for all resumable file uploads at line 1056. If this endpoint is used for other purposes (e.g., general file storage for chat context), this default will be incorrect. Read the purpose from phase-1 metadata:
- uploadReq.Purpose = "batch" // Default purpose for file uploads via GenAI API + // Use purpose from session metadata if available, otherwise default to "batch" + if session.Purpose != "" { + uploadReq.Purpose = session.Purpose + } else { + uploadReq.Purpose = "batch" + }And store it during phase 1 in the session.
1138-1163: Upload URL construction ignores reverse proxy headers.
The past review comment remains valid: the upload URL at lines 1140-1145 uses `ctx.Host()` and `ctx.IsTLS()` directly without checking `X-Forwarded-Proto` and `X-Forwarded-Host` headers. When Bifrost runs behind a reverse proxy, this produces an internal URL that causes the client's subsequent upload requests to fail. Follow the pattern from
session.go:- // Build the upload URL for phase 2 - // Use the request's host and scheme to build the URL - scheme := "http" - if ctx.IsTLS() { - scheme = "https" + // Determine scheme and host, respecting reverse proxy headers + scheme := "http" + if proto := string(ctx.Request.Header.Peek("X-Forwarded-Proto")); proto != "" { + if strings.Contains(proto, "https") { + scheme = "https" + } + } else if ctx.IsTLS() { + scheme = "https" + } + + host := string(ctx.Host()) + if forwardedHost := string(ctx.Request.Header.Peek("X-Forwarded-Host")); forwardedHost != "" { + host = forwardedHost } - host := string(ctx.Host()) uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)
🧹 Nitpick comments (1)
core/bifrost.go (1)
1133-1209: BatchDeleteRequest implementation is consistent with other batch APIs
The `BatchDeleteRequest` method follows the same validation, config resolution, base-provider/key selection, and retry pattern as `BatchCancelRequest`/`BatchRetrieveRequest`, and correctly tags `ExtraFields` on failure. The only noticeable duplication is the repeated provider/config/key boilerplate across batch methods; if this area evolves further, consider a small shared helper to reduce repetition (a sketch follows below), but it's not blocking for this stack.
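To make that suggestion concrete, here is a minimal sketch of such a shared helper. The type names and the lookup function are stand-ins invented for illustration, not Bifrost's actual internals:

package sketch

import (
	"errors"
	"fmt"
)

// Provider and Key are simplified stand-ins for the schemas types;
// the real methods resolve these via account and provider config.
type Provider interface{ Name() string }
type Key struct{ Value string }

// batchCallContext bundles everything a Batch* method resolves before dispatch.
type batchCallContext struct {
	provider Provider
	key      Key
}

// resolveBatchCall centralizes the provider/config/key boilerplate that
// BatchCancelRequest, BatchRetrieveRequest, and BatchDeleteRequest repeat.
func resolveBatchCall(lookup func() (Provider, Key, error)) (*batchCallContext, error) {
	p, k, err := lookup()
	if err != nil {
		return nil, fmt.Errorf("batch call setup failed: %w", err)
	}
	if p == nil {
		return nil, errors.New("no provider resolved for batch operation")
	}
	return &batchCallContext{provider: p, key: k}, nil
}

Each Batch* method would then keep only its operation-specific request/response handling.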
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
- core/providers/bedrock/batch.go
- transports/bifrost-http/integrations/utils.go
- core/schemas/batch.go
- core/schemas/bifrost.go
- core/providers/parasail/batch.go
- core/providers/ollama/batch.go
- core/providers/perplexity/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/providers/sgl/batch.go, core/providers/openai/batch.go, transports/bifrost-http/integrations/router.go, core/providers/anthropic/batch.go, core/providers/gemini/batch.go, core/providers/mistral/batch.go, core/providers/openrouter/batch.go, core/schemas/provider.go, core/providers/elevenlabs/batch.go, core/providers/groq/batch.go, core/providers/gemini/files.go, core/providers/gemini/types.go, core/providers/cerebras/batch.go, core/providers/cohere/batch.go, transports/bifrost-http/integrations/genai.go, core/bifrost.go, core/providers/azure/batch.go, core/providers/vertex/batch.go
🧬 Code graph analysis (15)
core/providers/openai/batch.go (3)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
transports/bifrost-http/integrations/router.go (4)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1): ErrResumableUploadInit (36-36)
core/schemas/bifrost.go (2): BatchDeleteRequest (103-103), FileUploadRequest (104-104)
core/schemas/provider.go (1): Provider (313-362)
core/providers/anthropic/batch.go (4)
core/providers/anthropic/anthropic.go (1): AnthropicProvider (21-28)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/gemini/batch.go (4)
core/schemas/batch.go (11): BatchStatus (5-5), BatchStatusValidating (8-8), BatchStatusInProgress (10-10), BatchStatusFinalizing (11-11), BatchStatusCompleted (12-12), BatchStatusFailed (9-9), BatchStatusCancelling (14-14), BatchStatusCancelled (15-15), BatchStatusExpired (13-13), BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (11): GeminiJobStatePending (1582-1582), GeminiJobStateRunning (1583-1583), GeminiJobStateSucceeded (1584-1584), GeminiJobStateFailed (1585-1585), GeminiJobStateCancelling (1586-1586), GeminiJobStateCancelled (1587-1587), GeminiBatchJobResponseSDK (1618-1624), GeminiBatchMetadata (1499-1512), GeminiBatchStats (1433-1437), GeminiBatchDest (1533-1536), GeminiBatchListResponseSDK (1627-1630)
core/providers/utils/utils.go (2): SetExtraHeaders (178-208), MakeRequestWithContext (39-93)
core/schemas/provider.go (1): Provider (313-362)
core/providers/mistral/batch.go (3)
core/providers/mistral/mistral.go (1): MistralProvider (17-22)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/openrouter/batch.go (4)
core/providers/openrouter/openrouter.go (1): OpenRouterProvider (18-23)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1): BatchDeleteRequest (103-103)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/providers/elevenlabs/batch.go (3)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/groq/batch.go (4)
core/providers/groq/groq.go (1): GroqProvider (16-21)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/gemini/types.go (1)
core/schemas/models.go (1): Model (109-129)
core/providers/cerebras/batch.go (4)
core/providers/cerebras/cerebras.go (1): CerebrasProvider (16-21)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/cohere/batch.go (3)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/bifrost.go (3)
core/schemas/bifrost.go (3): BatchDeleteRequest (103-103), BifrostError (364-373), RequestType (83-83)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (2): Provider (313-362), CustomProviderConfig (248-254)
core/providers/azure/batch.go (4)
core/providers/azure/azure.go (1): AzureProvider (22-27)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/providers/vertex/batch.go (3)
core/providers/vertex/vertex.go (1): VertexProvider (57-62)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (19)
core/providers/cerebras/batch.go (1)
35-38: LGTM! Consistent scaffolding for batch delete support.The BatchDelete stub follows the established pattern for unsupported batch operations in the Cerebras provider, correctly returning an unsupported operation error.
core/providers/elevenlabs/batch.go (1)
35-38: LGTM! Consistent implementation.The BatchDelete method correctly implements the unsupported operation pattern, maintaining consistency with other batch operations in the Elevenlabs provider.
core/providers/vertex/batch.go (1)
35-38: LGTM! Proper scaffolding for Vertex AI.The BatchDelete stub correctly follows the established pattern for unsupported batch operations in the Vertex AI provider.
core/providers/openrouter/batch.go (1)
35-38: LGTM! Consistent with OpenRouter's batch operation pattern.The BatchDelete stub properly returns an unsupported operation error, maintaining consistency with other batch methods.
core/providers/cohere/batch.go (1)
35-38: LGTM! Proper unsupported operation handling.The BatchDelete method correctly implements the unsupported operation pattern for the Cohere provider.
core/providers/azure/batch.go (1)
438-441: LGTM! Appropriate unsupported operation for Azure.The BatchDelete stub correctly returns an unsupported operation error. While Azure implements other batch operations, batch deletion is not yet supported, and this stub provides the appropriate interface implementation.
core/providers/groq/batch.go (1)
35-38: LGTM! Consistent implementation for Groq.The BatchDelete stub properly follows the established pattern for unsupported batch operations.
core/schemas/provider.go (3)
182-182: LGTM! Proper extension of AllowedRequests.The BatchDelete field correctly extends the AllowedRequests struct, enabling authorization control for batch delete operations.
231-232: LGTM! Correct switch case addition.The BatchDeleteRequest case properly maps to the BatchDelete field, maintaining consistency with other batch operation mappings.
350-351: LGTM! Well-integrated interface extension.The BatchDelete method addition to the Provider interface is properly documented and positioned. All providers in this PR now implement this interface method, ensuring consistent API surface across the stack.
core/providers/sgl/batch.go (1)
35-38: BatchDelete stub matches existing unsupported pattern
The SGL `BatchDelete` implementation correctly follows the established "unsupported operation" pattern used for the other batch methods on this provider; no issues from a core/stack perspective. (A minimal sketch of this shared stub pattern follows.)
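To ground the pattern being referenced, here is a self-contained sketch of what such an unsupported-operation stub looks like. The error helper below is a stand-in; the real code calls providers/utils.NewUnsupportedOperationError, whose exact signature is not shown in this review:

package sketch

import "fmt"

// newUnsupportedOperationError is a stand-in for the real
// providers/utils.NewUnsupportedOperationError helper.
func newUnsupportedOperationError(op, provider string) error {
	return fmt.Errorf("operation %q is not supported by provider %q", op, provider)
}

// batchDelete illustrates the stub shape: accept the request, do no work,
// and surface a consistent unsupported-operation error to the caller.
func batchDelete(provider string) error {
	return newUnsupportedOperationError("batch_delete", provider)
}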
core/providers/openai/batch.go (1)
597-600: OpenAI BatchDelete correctly surfaced as unsupported
The OpenAI `BatchDelete` method cleanly returns `NewUnsupportedOperationError`, which is consistent with the rest of the stack given there is no upstream OpenAI batch delete API wired here.
core/providers/mistral/batch.go (1)
35-38: Mistral BatchDelete unsupported implementation looks correct
The Mistral `BatchDelete` method matches the existing unsupported Batch* pattern and is wired to the proper `BatchDeleteRequest` request type; this aligns with the rest of the stack.
core/providers/anthropic/batch.go (1)
783-786: Anthropic BatchDelete stub aligns with provider capabilities
The Anthropic `BatchDelete` method correctly exposes the API surface while returning the standardized unsupported-operation error, which is appropriate given there is no Anthropic batch delete endpoint wired in this stack.
transports/bifrost-http/integrations/router.go (3)
81-89: BatchRequest extended cleanly for delete operations
Adding `DeleteRequest *schemas.BifrostBatchDeleteRequest` to `BatchRequest` is a straightforward extension and keeps all batch shapes in one place; it fits well with the new delete path in `handleBatchRequest`.
159-162: BatchDeleteResponseConverter wiring is consistent with existing batch converters
The new `BatchDeleteResponseConverter` type and the corresponding `RouteConfig.BatchDeleteResponseConverter` field mirror the pattern used for other batch responses, keeping integration-specific shaping out of the core router. This should make plugging Gemini/GenAI batch-delete SDK responses in genai.go straightforward without impacting other integrations.

Also applies to: 277-291
822-842: BatchDelete routing matches other batch operations
The new `schemas.BatchDeleteRequest` branch in `handleBatchRequest` is structured identically to create/list/retrieve/cancel/results: it validates `DeleteRequest`, calls `BatchDeleteRequest` on the Bifrost client, runs `PostCallback`, then applies the optional `BatchDeleteResponseConverter`. This is the right place in the stack for the new delete capability and looks correct end-to-end; a sketch of this control flow follows.
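For readers unfamiliar with the router, here is an illustrative-only sketch of the delete branch's control flow described above. The types are simplified stand-ins, not the actual router or schemas types:

package sketch

import (
	"context"
	"errors"
)

// deleteReq and deleteResp stand in for schemas.BifrostBatchDeleteRequest
// and schemas.BifrostBatchDeleteResponse.
type deleteReq struct{ BatchID string }
type deleteResp struct{ Deleted bool }

type batchClient interface {
	BatchDeleteRequest(ctx context.Context, r *deleteReq) (*deleteResp, error)
}

// handleBatchDelete mirrors the branch described above: validate the payload,
// dispatch to the client, then hand the response to an optional converter.
func handleBatchDelete(ctx context.Context, c batchClient, r *deleteReq,
	convert func(*deleteResp) (any, error)) (any, error) {
	if r == nil {
		return nil, errors.New("delete request payload is required")
	}
	resp, err := c.BatchDeleteRequest(ctx, r)
	if err != nil {
		return nil, err
	}
	if convert != nil {
		// Integration-specific shaping (e.g., the Gemini SDK response form).
		return convert(resp)
	}
	return resp, nil
}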
core/providers/gemini/batch.go (1)
986-1043: LGTM! BatchDelete implementation follows established patterns.
The `BatchDelete` method correctly mirrors the structure of `BatchCancel`, including URL construction, header handling, error checking, and response formatting. Accepting both 200 and 204 status codes (line 1029) is appropriate for DELETE operations.
core/providers/gemini/types.go (1)
1578-1651: LGTM! SDK batch types are well-structured.
The new GenAI SDK job state constants (lines 1578-1589) and SDK batch types (lines 1591-1651) provide a clean separation between internal batch states and SDK-facing representations. The type definitions are clear and properly documented.
Note: The validation issue for the `Src` field mentioned in past reviews is a transport-layer concern (in genai.go) rather than a type definition issue, and has been flagged separately.
// Uses snake_case field names to match Google's API format.
// GeminiFileUploadResponseWrapper is a wrapper that contains the file response for the upload API.
type GeminiFileUploadResponseWrapper struct {
	File GeminiFileResponse `json:"file"`
}

func ToGeminiFileUploadResponse(resp *schemas.BifrostFileUploadResponse) *GeminiFileUploadResponseWrapper {
	return &GeminiFileUploadResponseWrapper{
		File: GeminiFileResponse{
			Name:           resp.ID,
			DisplayName:    resp.Filename,
			MimeType:       "application/octet-stream",
			SizeBytes:      fmt.Sprintf("%d", resp.Bytes),
			CreateTime:     formatGeminiTimestamp(resp.CreatedAt),
			State:          toGeminiFileState(resp.Status),
			URI:            resp.StorageURI,
			ExpirationTime: formatGeminiTimestamp(safeDerefInt64(resp.ExpiresAt)),
		},
	}
}
Gemini file upload wrapper matches expected SDK shape (minor doc nit)
The new `GeminiFileUploadResponseWrapper` and `ToGeminiFileUploadResponse` produce the expected `{ "file": { ... } }` shape using `GeminiFileResponse`, which is consistent with the rest of the Gemini/GenAI integration. The comment about "snake_case field names" is slightly misleading given the struct tags are camelCase; consider updating the comment for accuracy when you next touch this code. The sketch below shows the serialized shape.
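For reference, a minimal stand-in reproducing the serialized shape (field set trimmed and values invented for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed stand-ins for GeminiFileResponse and its wrapper; the JSON tags
// use the camelCase convention noted above.
type fileResponse struct {
	Name        string `json:"name"`
	DisplayName string `json:"displayName"`
	MimeType    string `json:"mimeType"`
}

type uploadWrapper struct {
	File fileResponse `json:"file"`
}

func main() {
	w := uploadWrapper{File: fileResponse{
		Name:        "files/abc123", // illustrative value
		DisplayName: "report.pdf",   // illustrative value
		MimeType:    "application/octet-stream",
	}}
	out, _ := json.MarshalIndent(w, "", "  ")
	fmt.Println(string(out)) // prints {"file": {"name": "files/abc123", ...}}
}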
🤖 Prompt for AI Agents
In core/providers/gemini/files.go around lines 466 to 485, the top comment
incorrectly states “Uses snake_case field names to match Google's API format”
even though the struct uses JSON tags with camelCase; update the comment to
accurately describe the wrapper’s purpose and JSON shape (e.g., indicate it
produces a "file" wrapper with camelCase JSON tags matching the SDK/GenAI
integration) and keep the rest of the comment concise and aligned with the
implementation.
Actionable comments posted: 0
♻️ Duplicate comments (13)
transports/bifrost-http/integrations/utils.go (1)
188-197: Remove raw body debug logging from `sendSuccess`.
`fmt.Printf` at Line 196 logs the full JSON response body to stdout, which is both noisy in production and risks leaking sensitive data. It was already flagged in a previous review. Recommend dropping the log entirely (or, if you really need it, switch to the structured logger and avoid logging full payloads):
responseBody, err := sonic.Marshal(response) @@ - ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody))) - ctx.SetBody(responseBody) - fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody)) + ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody))) + ctx.SetBody(responseBody)transports/bifrost-http/integrations/router.go (3)
864-895: Removefmt.Printfdebug logging from file upload path.The file‑upload handler still uses several
fmt.Printfdebug statements (Lines 870, 877, 881, 883, 885, 889, 892, 894). These bypass the router logger and will spam stdout in production; this was already called out in earlier reviews.If you still need diagnostics here, switch to
g.logger.Debug(...)with structured fields and avoid printing raw errors in multiple places; otherwise, just remove them:- fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename) @@ - errMsg := "unknown error" - if bifrostErr.Error != nil { - errMsg = bifrostErr.Error.Message - } - fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider) @@ - fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID) @@ - fmt.Printf("[DEBUG] router: calling PostCallback\n") @@ - fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err) @@ - fmt.Printf("[DEBUG] router: PostCallback success\n") @@ - fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n") @@ - fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err)
1011-1013: Remove finalfmt.Printfon file response conversion errors.The bottom of
handleFileRequeststill prints conversion errors viafmt.Printfbefore sending an error response, duplicating prior feedback and mixing stdout logging with the structured error path:- if err != nil { - fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err) - g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response")) - return - } + if err != nil { + g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response")) + return + }
423-429: Callcancel()before early‑returning onErrResumableUploadInit.When
PreCallbackreturnsErrResumableUploadInit, the handler exits without invoking thecancelfunction returned byConvertToBifrostContext, so that cancellable context is never cleaned up on this path. This was already raised in a previous review.Recommend:
if config.PreCallback != nil { if err := config.PreCallback(ctx, bifrostCtx, req); err != nil { - // Check if this is a resumable upload init that was already handled - if err == ErrResumableUploadInit { - // Response was already written by the PreCallback, just return - return - } + // Check if this is a resumable upload init that was already handled + if err == ErrResumableUploadInit { + // Response was already written by the PreCallback; clean up context and return + cancel() + return + } g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error())) return } }transports/bifrost-http/integrations/genai.go (8)
52-67: Provide a shutdown signal for the cleanup goroutineThe
initgoroutine withtime.NewTicker(10 * time.Minute)runs forever and has no way to be stopped for graceful shutdown; this can block process exit in some environments. Consider wiring a stop channel or context and stopping the ticker when shutting down the HTTP server, as previously suggested.
207-261: Resumable upload phase‑2 response converter still uses debugfmt.Printf
FileUploadResponseConverterfor the POST resumable endpoint logs multiple[DEBUG]messages viafmt.Printf, including dumping the converted response. These were previously flagged for removal or replacement with structured logging and will be noisy in production.You can either remove these debug prints entirely or switch to the existing logger with a concise message (no full JSON marshal of the response), and drop the
fmtusage if nothing else needs it.
442-488: ValidateSrctype in SDK batch create to avoid opaque downstream errors
GeminiBatchCreateRequestSDK.Srcis type‑switched only forstringand[]interface{}; other types (or non‑map items inside the slice) are silently ignored, so the provider later fails with a generic “either input_file_id or requests must be provided” error. This makes badsrcpayloads hard to debug.Consider rejecting unsupported shapes early:
- switch src := sdkReq.Src.(type) { + switch src := sdkReq.Src.(type) { case string: bifrostReq.InputFileID = strings.TrimPrefix(src, "files/") case []interface{}: // existing loop... + default: + return nil, fmt.Errorf("src must be a string (file reference) or array of request objects") }and, inside the
[]interface{}branch, validate that each element is amap[string]interface{}and return a clear error if not, instead of silently skipping it.
1011-1063: Resumable phase‑2 handler still has debug prints and hardcoded purpose
parseGeminiResumableUploadPhase2:
- Uses multiple
fmt.Printfdebug statements that should be removed or replaced with structured logging before production.- Hardcodes
uploadReq.Purpose = "batch", which may be incorrect if the resumable upload is reused for non‑batch purposes (prior feedback).Consider:
- Making
Purposepart of the phase‑1 metadata/session so it can be preserved here.- Dropping the debug prints or routing them through the existing logger at debug level with concise messages.
1066-1079: Post‑callback correctly marks upload as final; drop heavy debug logging
setResumableUploadFinalStatuscorrectly setsX-Goog-Upload-Status: final, but marshaling the full response viasonic.Marshaland printing it withfmt.Printfis expensive and noisy, and was previously flagged.You can keep just the header mutation and, if needed, log a short message via the structured logger without serializing the whole response.
45-48: Handlecrypto/rand.Readerror when generating session IDs
generateSessionIDignores the error fromrand.Read, which can theoretically produce predictable IDs if the random source fails. At minimum, check the error and fall back to a timestamp‑based ID (or log and fail fast) instead of silently proceeding.func generateSessionID() string { - bytes := make([]byte, 16) - rand.Read(bytes) - return hex.EncodeToString(bytes) + bytes := make([]byte, 16) + if _, err := rand.Read(bytes); err != nil { + // Very unlikely, but avoid silently using zero bytes. + // Fall back to a timestamp-based ID so it's still unique-ish. + return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().UnixNano()) + } + return hex.EncodeToString(bytes) }
867-934: Fix binary data corruption in multipart/related parsing
parseGeminiMultipartRelatedcurrently does:
parts := strings.Split(string(body), delimiter)- Works with
partas astring- Assigns file content via
uploadReq.File = []byte(content)Converting arbitrary file bytes to
stringcan corrupt binary data (null bytes, invalid UTF‑8) and break uploads, as previously flagged.Refactor to operate purely on
[]byte:+ import "bytes" ... - delimiter := "--" + boundary - parts := strings.Split(string(body), delimiter) + delimiter := []byte("--" + boundary) + parts := bytes.Split(body, delimiter) ... - for _, part := range parts { - part = strings.TrimSpace(part) - if part == "" || part == "--" { + for _, partBytes := range parts { + partBytes = bytes.TrimSpace(partBytes) + if len(partBytes) == 0 || bytes.Equal(partBytes, []byte("--")) { continue } - headerEnd := strings.Index(part, "\r\n\r\n") + headerSep := []byte("\r\n\r\n") + headerEnd := bytes.Index(partBytes, headerSep) if headerEnd == -1 { - headerEnd = strings.Index(part, "\n\n") + headerSep = []byte("\n\n") + headerEnd = bytes.Index(partBytes, headerSep) if headerEnd == -1 { continue } } - - headers := part[:headerEnd] - content := part[headerEnd:] - content = strings.TrimPrefix(content, "\r\n\r\n") - content = strings.TrimPrefix(content, "\n\n") + headers := string(partBytes[:headerEnd]) + content := partBytes[headerEnd+len(headerSep):] ... - if strings.Contains(headersLower, "application/json") { + if strings.Contains(headersLower, "application/json") { // metadata - if err := sonic.Unmarshal([]byte(content), &metadata); err == nil { + if err := sonic.Unmarshal(content, &metadata); err == nil { ... } else { // This is file content - uploadReq.File = []byte(content) + uploadReq.File = append([]byte(nil), content...) }This preserves binary content without string round‑trips.
1115-1163: Support reverse‑proxy headers when constructing resumable upload URL
extractGeminiFileUploadParamsbuilds the resumable upload URL usingctx.IsTLS()andctx.Host()directly. Behind a reverse proxy, this will return the internal host/scheme, so the client receives an unusable URL. Prior review already called this out.You can mirror the pattern from
session.goand honorX-Forwarded-Proto/X-Forwarded-Host:- // Build the upload URL for phase 2 - // Use the request's host and scheme to build the URL - scheme := "http" - if ctx.IsTLS() { - scheme = "https" - } - host := string(ctx.Host()) + // Build the upload URL for phase 2, honoring reverse-proxy headers + scheme := "http" + if xfProto := strings.ToLower(string(ctx.Request.Header.Peek("X-Forwarded-Proto"))); xfProto != "" { + if strings.Contains(xfProto, "https") { + scheme = "https" + } + } else if ctx.IsTLS() { + scheme = "https" + } + host := string(ctx.Request.Header.Peek("X-Forwarded-Host")) + if host == "" { + host = string(ctx.Host()) + } uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)This makes the resumable URL usable for clients when Bifrost is fronted by a proxy.
core/providers/gemini/batch.go (1)
871-972: Defensively derive batch stats to avoid negative pending/success countsThe current stats math:
- Uses
RequestCount = resp.RequestCounts.Total- Computes
PendingRequestCount = Total - Completed- Computes
SuccessfulRequestCount = Completed - FailedIn
BifrostBatchRetrieveResponse(and sometimes list entries),RequestCounts.Totalis left at zero whileCompleted/Failedare non‑zero, soPendingRequestCountandSuccessfulRequestCountcan become negative, which is invalid for counts and can break SDK consumers. This matches the previously flagged issue.A more defensive pattern in all three converters (
ToGeminiBatchJobResponse,ToGeminiBatchRetrieveResponse,ToGeminiBatchListResponse) would be:- result.Metadata = &GeminiBatchMetadata{ - Name: resp.ID, - State: ToGeminiJobState(resp.Status), - CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339), - BatchStats: &GeminiBatchStats{ - RequestCount: resp.RequestCounts.Total, - PendingRequestCount: resp.RequestCounts.Total - resp.RequestCounts.Completed, - SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed, - }, - } + total := resp.RequestCounts.Total + if total == 0 { + total = resp.RequestCounts.Completed + resp.RequestCounts.Failed + } + if total < 0 { + total = 0 + } + success := resp.RequestCounts.Completed + if success < 0 { + success = 0 + } + failures := resp.RequestCounts.Failed + if failures < 0 { + failures = 0 + } + pending := total - (success + failures) + if pending < 0 { + pending = 0 + } + + result.Metadata = &GeminiBatchMetadata{ + Name: resp.ID, + State: ToGeminiJobState(resp.Status), + CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339), + BatchStats: &GeminiBatchStats{ + RequestCount: total, + PendingRequestCount: pending, + SuccessfulRequestCount: success, + }, + }and analogously for the list converter (using
batch.RequestCountsinstead ofresp.RequestCounts), soRequestCount ≥ 0,PendingRequestCount ≥ 0, andRequestCount ≈ success + failures + pendingalways hold.
🧹 Nitpick comments (2)
core/providers/gemini/files.go (1)
465-549: Gemini file SDK converters look consistent; optional MIME/URI enrichmentThe new
ToGeminiFileUploadResponse,ToGeminiFileListResponse, andToGeminiFileRetrieveResponsemappings are consistent with the Gemini SDK schema and reuseformatGeminiTimestamp/toGeminiFileState/safeDerefInt64correctly. The defaultMimeTypeand missingURIin list responses are acceptable given the currentBifrostFile*shapes, but if you later plumb MIME type and storage URI intoBifrostFileListResponse/BifrostFileRetrieveResponse, it would be worth populating those here too for parity with upload responses.transports/bifrost-http/integrations/genai.go (1)
1081-1112: Resumable upload param extraction and session cleanup look correct
extractGeminiResumableUploadParamssafely:
- Validates
session_idfrom the path.- Loads provider and original filename from
uploadSessions.- Deletes the session once consumed.
- Stashes provider and filename in
bifrostCtxfor downstream converters.Apart from the debug
fmt.Printf, this flow is sound and avoids session leaks.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openai/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (13)
- core/providers/perplexity/batch.go
- core/providers/cerebras/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/openai/batch.go
- core/providers/bedrock/batch.go
- core/providers/ollama/batch.go
- core/providers/vertex/batch.go
- core/providers/cohere/batch.go
- core/schemas/bifrost.go
- core/schemas/batch.go
- core/providers/openrouter/batch.go
- core/providers/groq/batch.go
- core/providers/anthropic/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/providers/mistral/batch.go, transports/bifrost-http/integrations/router.go, core/schemas/provider.go, core/providers/sgl/batch.go, transports/bifrost-http/integrations/genai.go, transports/bifrost-http/integrations/utils.go, core/providers/gemini/files.go, core/providers/gemini/types.go, core/providers/azure/batch.go, core/providers/parasail/batch.go, core/bifrost.go, core/providers/gemini/batch.go
🧬 Code graph analysis (9)
core/providers/mistral/batch.go (5)
core/providers/mistral/mistral.go (1): MistralProvider (17-22)
core/schemas/account.go (1): Key (8-17)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1): BatchDeleteRequest (103-103)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1): SGLProvider (18-23)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)
transports/bifrost-http/integrations/genai.go (5)
core/schemas/provider.go (1): Provider (313-362)
core/schemas/bifrost.go (11): ModelProvider (32-32), Gemini (48-48), FileUploadRequest (104-104), FileListRequest (105-105), FileRetrieveRequest (106-106), FileDeleteRequest (107-107), BatchCreateRequest (98-98), BatchListRequest (99-99), BatchRetrieveRequest (100-100), BatchCancelRequest (101-101), BatchDeleteRequest (103-103)
transports/bifrost-http/integrations/router.go (5): EmbeddingResponseConverter (129-129), RequestParser (217-217), FileRequestConverter (105-105), FileRequest (92-99), BatchRequest (81-89)
core/providers/gemini/types.go (10): GeminiFileUploadRequest (1656-1662), Type (779-779), GeminiFileListRequest (1665-1669), GeminiFileRetrieveRequest (1672-1674), GeminiFileDeleteRequest (1677-1679), GeminiBatchCreateRequestSDK (1596-1602), GeminiBatchListRequestSDK (1633-1636), GeminiBatchRetrieveRequestSDK (1639-1641), GeminiBatchCancelRequestSDK (1644-1646), GeminiBatchDeleteRequestSDK (1649-1651)
core/schemas/batch.go (10): BifrostBatchCreateRequest (65-82), BatchRequestItem (31-37), BifrostBatchCreateResponse (85-109), BifrostBatchListRequest (112-126), BifrostBatchRetrieveRequest (143-149), BifrostBatchRetrieveResponse (152-187), BifrostBatchCancelRequest (190-196), BifrostBatchCancelResponse (199-208), BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/providers/gemini/files.go (1)
core/schemas/files.go (1): BifrostFileUploadResponse (89-105)
core/providers/gemini/types.go (1)
core/schemas/models.go (1): Model (109-129)
core/providers/parasail/batch.go (3)
core/providers/parasail/parasail.go (1): ParasailProvider (17-22)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
core/bifrost.go (4)
core/schemas/bifrost.go (5): BatchDeleteRequest (103-103), BifrostError (364-373), ErrorField (382-389), BifrostErrorExtraFields (431-435), RequestType (83-83)
core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (1): Provider (313-362)
core/schemas/account.go (1): Key (8-17)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (7): BatchStatus (5-5), BifrostBatchCreateResponse (85-109), BifrostBatchRetrieveResponse (152-187), BifrostBatchListResponse (129-140), BifrostBatchCancelResponse (199-208), BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (11): GeminiJobStatePending (1582-1582), GeminiJobStateRunning (1583-1583), GeminiJobStateSucceeded (1584-1584), GeminiJobStateFailed (1585-1585), GeminiJobStateCancelling (1586-1586), GeminiJobStateCancelled (1587-1587), GeminiBatchJobResponseSDK (1618-1624), GeminiBatchMetadata (1499-1512), GeminiBatchStats (1433-1437), GeminiBatchDest (1533-1536), GeminiBatchListResponseSDK (1627-1630)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (25)
core/providers/sgl/batch.go (1)
35-38: BatchDelete stub is consistent with other unsupported batch ops.
The new `BatchDelete` method cleanly mirrors the existing unsupported Batch* methods for SGL; interface coverage looks good.
core/providers/azure/batch.go (1)
437-441: AzureBatchDeleteunsupported stub looks correct.Method signature and unsupported‑operation behavior are consistent with other Azure batch methods and the Provider interface.
core/schemas/provider.go (1)
161-183: BatchDelete wiring in schemas is consistent with existing batch APIs.Adding
BatchDeletetoAllowedRequests,IsOperationAllowed, and theProviderinterface cleanly extends the existing batch surface; it matches patterns for create/list/retrieve/cancel/results.Also applies to: 196-233, 340-352
core/providers/mistral/batch.go (1)
35-38: MistralBatchDeleteunsupported implementation matches existing pattern.The method correctly returns
NewUnsupportedOperationErrorwithBatchDeleteRequestand the provider key, consistent with other batch stubs.core/bifrost.go (1)
1133-1209:BatchDeleteRequestimplementation is consistent with other batch methods.Validation, config lookup, key selection, retry handling, and error
ExtraFieldsassignment all mirror the existing Batch* methods; API surface and behavior look coherent.core/providers/parasail/batch.go (1)
35-38: ParasailBatchDeletestub aligns with other unsupported batch operations.Implementation is straightforward and matches the existing unsupported Batch* methods for this provider.
transports/bifrost-http/integrations/router.go (3)
80-89: ExtendingBatchRequestwithDeleteRequestlooks correct.New
DeleteRequest *BifrostBatchDeleteRequestfield integrates cleanly with the existing batch wrapper; used only whenType == BatchDeleteRequest.
159-162: BatchDelete response converter wiring is consistent with existing batch converters.The
BatchDeleteResponseConvertertype and correspondingRouteConfig.BatchDeleteResponseConverterfield follow the same pattern as the other Batch* converters; no issues.Also applies to: 286-292
705-855: BatchDelete handling inhandleBatchRequestis aligned with other batch operations.The new
schemas.BatchDeleteRequestcase validatesDeleteRequest, callsBatchDeleteRequeston the client, runsPostCallback, and applies the optionalBatchDeleteResponseConverter, mirroring the other batch cases; behavior looks correct.Also applies to: 822-843
core/providers/gemini/batch.go (2)
847-868: Bifrost → Gemini job state mapping looks reasonable
`ToGeminiJobState` covers all `schemas.BatchStatus` variants and degrades unknown values to `JOB_STATE_PENDING`. Mapping `BatchStatusExpired` to `JOB_STATE_FAILED` is a defensible choice given the SDK state set; a sketch of the full mapping follows.
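To make the mapping concrete, here is a self-contained sketch. Only the Expired-to-FAILED and unknown-to-PENDING pairs are confirmed by the review text above; the remaining pairs follow the natural reading of the surrounding type lists and may differ in detail from the source:

package sketch

// batchStatus and jobState are local stand-ins for schemas.BatchStatus and
// the Gemini SDK job state strings.
type batchStatus string
type jobState string

const (
	statusValidating batchStatus = "validating"
	statusInProgress batchStatus = "in_progress"
	statusFinalizing batchStatus = "finalizing"
	statusCompleted  batchStatus = "completed"
	statusFailed     batchStatus = "failed"
	statusExpired    batchStatus = "expired"
	statusCancelling batchStatus = "cancelling"
	statusCancelled  batchStatus = "cancelled"
)

// toGeminiJobState sketches the conversion discussed above.
func toGeminiJobState(s batchStatus) jobState {
	switch s {
	case statusValidating:
		return "JOB_STATE_PENDING"
	case statusInProgress, statusFinalizing:
		return "JOB_STATE_RUNNING"
	case statusCompleted:
		return "JOB_STATE_SUCCEEDED"
	case statusFailed, statusExpired:
		return "JOB_STATE_FAILED" // Expired degrades to failed, per the review
	case statusCancelling:
		return "JOB_STATE_CANCELLING"
	case statusCancelled:
		return "JOB_STATE_CANCELLED"
	default:
		return "JOB_STATE_PENDING" // unknown values degrade to pending
	}
}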
986-1043: BatchDelete implementation aligns with other Gemini batch operationsThe new
BatchDeletemethod correctly validatesbatch_id, builds the URL for bothbatches/{id}and already-prefixed IDs, sendsDELETE, and treats200/204as success while populating latency metadata. This matches the pattern inBatchCanceland the other batch methods.transports/bifrost-http/integrations/genai.go (12)
175-235: File upload request conversion and provider resolution look correctThe primary
/upload/v1beta/filesroute correctly convertsGeminiFileUploadRequestintoBifrostFileUploadRequest, resolves the provider from context, and prefersRawResponsewhen present. This matches the patterns used for other providers and keeps Gemini‑specific SDK wrapping in the provider layer.
263-314: PUT resumable continuation path mirrors POST semantics correctlyThe PUT resumable route shares the same request parser, provider resolution, and response shaping as the POST variant, which is useful for client compatibility. The “original filename from context” logic and final‑status post‑callback are wired consistently.
322-433: File list / retrieve / delete routes: conversions look consistentThe new Gemini file list, retrieve, and delete routes correctly:
- Read provider from
x-model-providerheader with Gemini as default.- Populate the
GeminiFile*Requesttypes from path/query params.- Convert to
BifrostFile*Requestand back to SDK‑style responses, returning an empty object on delete (matching Gemini behavior).No functional issues stand out here.
503-543: Batch list / retrieve / cancel route wiring matches core schemasThe new batch list, retrieve, and cancel routes:
- Correctly default provider to Gemini or take it from
x-model-provider.- Map SDK request types into the appropriate
BifrostBatch*Requeststructs.- Delegate response shaping to the Gemini converters, while honoring
RawResponsepassthrough.The path patterns and
PreCallbackhelpers (extractGeminiBatchListQueryParams,extractGeminiBatchIDFromPath,extractGeminiBatchIDFromPathCancel) are consistent with the underlying core behavior.
621-658: Batch delete route wiring is consistent with other batch endpointsThe DELETE
/v1beta/batches/{batch_id}route properly:
- Normalizes
batch_idviaextractGeminiBatchIDFromPath.- Wraps into
BifrostBatchDeleteRequest.- Returns an empty object on success, which matches Gemini’s behavior for delete.
This looks correct and aligned with the new provider‑level
BatchDelete.
664-744: Good reuse of provider extraction helpers for all batch endpointsThe
extractGeminiBatchCreateParams,extractGeminiBatchListQueryParams, andextractGeminiBatchIDFromPath*helpers consistently:
- Derive provider from
x-model-provider(defaulting to Gemini).- Normalize model or
batch_idpath parameters.- Populate the respective SDK request types in place.
This keeps transport concerns localized and avoids duplicating logic per route.
780-806: Resumable vs standard/multipart detection logic is reasonable
`parseGeminiFileUploadRequest` distinguishes:
- `X-Goog-Upload-Protocol` of `resumable`/`multipart` → resumable handler.
- `multipart/*` → standard multipart form handler.
- Otherwise → raw body.

This routing matches the GenAI SDK patterns and should let both SDK and non-SDK clients interoperate; see the sketch after this list.
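A compact sketch of that dispatch, with placeholder handler names standing in for the real parser functions:

package sketch

import "strings"

// routeUpload returns which handler the request would be dispatched to,
// following the routing in the list above. Handler names are placeholders.
func routeUpload(uploadProtocol, contentType string) string {
	switch {
	case uploadProtocol == "resumable" || uploadProtocol == "multipart":
		return "resumable-handler"
	case strings.HasPrefix(contentType, "multipart/"):
		return "multipart-form-handler"
	default:
		return "raw-body-handler"
	}
}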
937-982: Multipart/form-data upload handler looks solid
parseGeminiMultipartUploadcorrectly:
- Parses optional JSON metadata for
displayName.- Extracts the
filepart fromctx.MultipartForm().- Reads exactly
fileHeader.Sizebytes into memory and falls back to the original filename whenmetadatais absent.No obvious correctness issues here.
985-1009: Raw body upload path is straightforward and safe
parseGeminiRawUploadvalidates non‑empty body, copies it intouploadReq.File, and attempts to derive a filename fromContent-Disposition. That’s a reasonable default path for simple clients and keeps the request body immutable.
1171-1245: File list/retrieve/delete param extractors are consistent and safeThe
extractGeminiFileListQueryParams,extractGeminiFileRetrieveParams, andextractGeminiFileDeleteParamshelpers all:
- Normalize provider from
x-model-provider.- Validate required path params (
file_id).- Populate the Gemini SDK request types in place.
These helpers are consistent with the batch equivalents and look correct.
1247-1252: GenAI router now composes batch, file, and core routes cleanly
NewGenAIRouternow appendsCreateGenAIBatchRouteConfigsafter the existing chat and file routes, which keeps the stack of GenAI endpoints cohesive. No routing conflicts are apparent given the distinct path prefixes.
812-865: Consider returning an error for invalid or missing metadata instead of silent failureWhen
sonic.Unmarshalfails to parse the metadata JSON, the function only logs a debug message and returnsnil, leavinguploadReqwith uninitialized fields. This defers the error detection to a downstream generic "file required" check, making diagnosis harder.Return an error explicitly when metadata parsing fails or when required fields (
DisplayName,MimeType) are missing, rather than proceeding with a partially initialized upload session.core/providers/gemini/types.go (2)
1578-1651: SDK batch job state and response types align with convertersThe added Google GenAI SDK job state constants and the
GeminiBatch*SDKtypes (GeminiBatchJobResponseSDK, list/create/list‑request/retrieve/cancel/delete) match how the transport and provider converters are using them. The separation between internal batch states (GeminiBatchState*) and SDK‑facing job states (GeminiJobState*) is clear and makes theToGeminiJobStatemapping straightforward.
1655-1662: Updated GeminiFileUploadRequest supports MIME type and resumable metadata cleanlyThe redesigned
GeminiFileUploadRequestwithFile,Filename,MimeType,Purpose, and internalResumableSessionIDmatches the new upload parsing logic ingenai.goand decouples this type from coreschemas. This should make it easier to evolve Gemini‑specific file features without touching core schemas.
4d7148c to ee1a292 (Compare)
a6632ea to a161f9c (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (5)
transports/bifrost-http/integrations/utils.go (1)
188-197: Remove debug log that prints full response body.
Logging the entire JSON response (
body=%s) risks leaking PII, API keys, and other sensitive content. Keep the explicitContent-Lengthheader, but drop this Printf (or log only non-body metadata via your structured logger).- ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody))) - ctx.SetBody(responseBody) - fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody)) + ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody))) + ctx.SetBody(responseBody)core/providers/gemini/files.go (1)
465-485: Fix outdated comment about snake_case vs actual JSON shape.
The
GeminiFileUploadResponseWrapperuses camelCase JSON tags and wraps the response as{ "file": { ... } }, so the “snake_case field names” comment is misleading. Consider updating to something like:-// ToGeminiFileUploadResponse converts a Bifrost file upload response to Gemini format. -// Uses snake_case field names to match Google's API format. -// GeminiFileUploadResponseWrapper is a wrapper that contains the file response for the upload API. +// ToGeminiFileUploadResponse converts a Bifrost file upload response into the Gemini +// upload response shape: { "file": { ... } } with camelCase JSON field names. +// GeminiFileUploadResponseWrapper wraps the inner GeminiFileResponse for the upload API.transports/bifrost-http/integrations/router.go (2)
419-431: Call `cancel()` before early-returning from PreCallback, especially on `ErrResumableUploadInit`
When `PreCallback` returns `ErrResumableUploadInit`, the handler returns without invoking the `cancel` function created by `ConvertToBifrostContext`, leaving that context hanging. Same for any other error path here before you reach a `defer cancel()`. Recommend:
if config.PreCallback != nil { if err := config.PreCallback(ctx, bifrostCtx, req); err != nil { // Check if this is a resumable upload init that was already handled if err == ErrResumableUploadInit { - // Response was already written by the PreCallback, just return - return + // Response was already written by the PreCallback; clean up context and return + cancel() + return } - g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error())) - return + cancel() + g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error())) + return } }This matches the intent from the earlier review and keeps context lifecycle consistent with the rest of the handler.
864-895: Remove or replace remaining `fmt.Printf` debug logs in file handlers
The FileUpload and file-response error paths still use `fmt.Printf` for debug output:
fmt.Printffor debug output:
- Lines around 870–895: multiple
[DEBUG] router: ...prints.- Line ~1012:
"[DEBUG] router: file response conversion error".These were previously flagged and are inconsistent with the rest of the router’s structured logging.
Either drop them entirely or route them through
g.logger.Debug(...)with appropriate structured fields:- fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", ...) + g.logger.Debug("router file upload request", "provider", string(fileReq.UploadRequest.Provider), "purpose", fileReq.UploadRequest.Purpose, "filename", fileReq.UploadRequest.Filename) - fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider) + g.logger.Debug("router file upload error", "provider", string(fileReq.UploadRequest.Provider), "error", errMsg) - fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID) + g.logger.Debug("router file upload success", "response_id", fileResponse.ID) - fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err) + g.logger.Debug("router file response conversion error", "error", err)(or just remove them if you don’t need this verbosity).
Also applies to: 1011-1013
core/providers/gemini/batch.go (1)
847-896: Fix Gemini batch stats math to avoid negative pending and miscomputed successes
The three SDK converters reuse the same stats formula:
RequestCount = resp.RequestCounts.TotalPendingRequestCount = resp.RequestCounts.Total - resp.RequestCounts.CompletedSuccessfulRequestCount = resp.RequestCounts.Completed - resp.RequestCounts.FailedThis has two problems:
- For
BifrostBatchRetrieveResponse(and entries inBifrostBatchListResponse),RequestCounts.Totalis often0whileCompleted/Failedare populated, soPendingRequestCountbecomes negative.- Treating
SuccessfulRequestCountasCompleted - Failedis inconsistent with howRequestCountsare filled (in retrieve/results flowsCompletedalready represents successful completions;Failedis separate).That leads to invalid/negative stats in the Gemini SDK surfaces.
Consider centralizing a defensive derivation, e.g.:
+// deriveGeminiBatchStats normalizes batch counts to avoid negative or inconsistent values. +func deriveGeminiBatchStats(counts schemas.BatchRequestCounts) *GeminiBatchStats { + total := counts.Total + if total <= 0 { + // Fallback: infer total from observed outcomes + total = counts.Completed + counts.Failed + } + if total < 0 { + total = 0 + } + + success := counts.Completed + if success < 0 { + success = 0 + } + + pending := total - (success + counts.Failed) + if pending < 0 { + pending = 0 + } + + return &GeminiBatchStats{ + RequestCount: total, + PendingRequestCount: pending, + SuccessfulRequestCount: success, + } +}and then use it in all three converters:
- BatchStats: &GeminiBatchStats{ - RequestCount: resp.RequestCounts.Total, - PendingRequestCount: resp.RequestCounts.Total - resp.RequestCounts.Completed, - SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed, - }, + BatchStats: deriveGeminiBatchStats(resp.RequestCounts),Apply the same change in:
ToGeminiBatchJobResponse(BifrostBatchCreateResponse)ToGeminiBatchRetrieveResponseToGeminiBatchListResponseto ensure
RequestCount ≥ 0,PendingRequestCount ≥ 0, andRequestCount ≈ success + failures + pendingacross all SDK views.Also applies to: 899-933, 935-972
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
- core/providers/vertex/batch.go
- core/schemas/batch.go
- core/providers/anthropic/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/ollama/batch.go
- core/providers/cohere/batch.go
- core/providers/cerebras/batch.go
- core/providers/groq/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/openrouter/batch.go
- core/providers/sgl/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/perplexity/batch.go
- core/providers/bedrock/batch.go
- core/providers/mistral/batch.go
- core/schemas/provider.go
- core/schemas/bifrost.go
- core/bifrost.go
- core/providers/azure/batch.go
- core/providers/gemini/batch.go
- transports/bifrost-http/integrations/genai.go
- core/providers/gemini/files.go
- transports/bifrost-http/integrations/router.go
- core/providers/parasail/batch.go
- core/providers/gemini/types.go
🧬 Code graph analysis (11)
core/providers/openrouter/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/perplexity/batch.go (4)
- core/providers/perplexity/perplexity.go (1): PerplexityProvider (19-24)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/bedrock/batch.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/mistral/batch.go (3)
- core/providers/mistral/mistral.go (1): MistralProvider (17-22)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)

core/schemas/provider.go (3)
- core/schemas/bifrost.go (2): BatchDeleteRequest (103-103), BifrostError (364-373)
- core/schemas/account.go (1): Key (8-17)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)

core/schemas/bifrost.go (1)
- ui/lib/types/config.ts (1): RequestType (122-134)

core/bifrost.go (4)
- core/schemas/bifrost.go (3): BatchDeleteRequest (103-103), BifrostError (364-373), RequestType (83-83)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/provider.go (2): Provider (313-362), CustomProviderConfig (248-254)
- core/schemas/account.go (1): Key (8-17)

core/providers/gemini/batch.go (2)
- core/schemas/batch.go (11): BatchStatus (5-5), BatchStatusValidating (8-8), BatchStatusInProgress (10-10), BatchStatusFinalizing (11-11), BatchStatusCompleted (12-12), BatchStatusFailed (9-9), BatchStatusCancelling (14-14), BatchStatusCancelled (15-15), BatchStatusExpired (13-13), BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/providers/gemini/types.go (11): GeminiJobStatePending (1582-1582), GeminiJobStateRunning (1583-1583), GeminiJobStateSucceeded (1584-1584), GeminiJobStateFailed (1585-1585), GeminiJobStateCancelling (1586-1586), GeminiJobStateCancelled (1587-1587), GeminiBatchJobResponseSDK (1618-1624), GeminiBatchMetadata (1499-1512), GeminiBatchStats (1433-1437), GeminiBatchDest (1533-1536), GeminiBatchListResponseSDK (1627-1630)

core/providers/gemini/files.go (1)
- core/schemas/files.go (1): BifrostFileUploadResponse (89-105)

core/providers/parasail/batch.go (4)
- core/providers/parasail/parasail.go (1): ParasailProvider (17-22)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/gemini/types.go (1)
- core/schemas/models.go (1): Model (109-129)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (16)
core/schemas/bifrost.go (1)
85-109: BatchDeleteRequest enum extension looks consistent. Adding `BatchDeleteRequest = "batch_delete"` here keeps the RequestType enum aligned with the new batch delete flow and the JSON-facing string values. No issues from a schema/API consistency perspective.

core/providers/gemini/types.go (1)

1578-1651: SDK batch/job types and updated file upload request look internally consistent. The added JOB_STATE_* constants, SDK batch types, and the extended `GeminiFileUploadRequest` (with `MimeType` and `ResumableSessionID`) align with the existing Gemini type patterns and should integrate cleanly with the new batch/delete and resumable upload flows. Please just make sure these shapes are exercised in your GenAI integration tests against the real SDK/API.

Also applies to: 1655-1662
core/providers/openrouter/batch.go (1)
35-38: OpenRouter BatchDelete stub matches existing unsupported pattern. `BatchDelete` correctly returns `NewUnsupportedOperationError` for `BatchDeleteRequest`, consistent with the other Batch* stubs in this provider and with the new Provider interface.

core/providers/bedrock/batch.go (1)

1063-1067: Bedrock BatchDelete explicitly marked unsupported. Adding `BatchDelete` here with a `NewUnsupportedOperationError` cleanly completes the Provider interface for Bedrock and correctly reflects that Bedrock doesn't expose a batch delete operation.

core/providers/perplexity/batch.go (1)

35-38: Perplexity BatchDelete stub is consistent with other unsupported batch methods. Returning `NewUnsupportedOperationError` for `BatchDeleteRequest` matches the existing pattern for Perplexity's unsupported batch operations.

core/schemas/provider.go (1)

182-188: BatchDelete wiring in schemas/provider looks correct and complete. Adding `BatchDelete` to `AllowedRequests`, teaching `IsOperationAllowed` about `BatchDeleteRequest`, and extending the `Provider` interface with a `BatchDelete` method together give BatchDelete first-class treatment alongside the other batch ops. This matches the new BatchDelete request/response types and the provider stubs/implementations in this stack.

Also applies to: 231-233, 350-351
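To make that wiring concrete, opting a provider into batch deletes might look like this (a sketch only; the sibling flag names and the config field that embeds `AllowedRequests` are assumptions, while `IsOperationAllowed` and `BatchDeleteRequest` are the names used in this comment):

```go
// Permit only batch lifecycle operations for this provider; a nil
// AllowedRequests is described above as meaning all operations allowed.
allowed := &schemas.AllowedRequests{
	BatchCreate: true, // assumed sibling flag
	BatchDelete: true, // the newly added flag
}
config.AllowedRequests = allowed // field name on the provider config assumed

// Authorization then funnels through the same switch as the other batch ops.
if !config.IsOperationAllowed(schemas.BatchDeleteRequest) {
	// reject with an unsupported-operation error, as the provider stubs do
}
```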
core/providers/sgl/batch.go (1)
35-38: SGL BatchDelete stub is consistent with existing unsupported batch operations. Signature and error pattern match the other Batch* stubs for SGL; no issues.

core/providers/mistral/batch.go (1)

35-38: Mistral BatchDelete unsupported stub looks correct. Implements the interface and uses the shared unsupported-operation helper consistently with other Batch* methods.

core/providers/parasail/batch.go (1)

35-38: Parasail BatchDelete stub is aligned with existing batch behavior. Matches the other unsupported Batch* implementations for Parasail; nothing concerning.

core/providers/azure/batch.go (1)

438-441: Azure BatchDelete correctly wired as unsupported. The stub method matches the Provider interface and returns a standardized unsupported-operation error; consistent with "not yet implemented" behavior.

core/bifrost.go (1)

1133-1209: BatchDeleteRequest core API mirrors existing Batch* patterns. Validation, provider/config lookup, key handling, and retry wrapping are consistent with the other batch methods; error ExtraFields are set correctly. Looks good.
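As a usage illustration, a caller of this new core method might look like the following (a sketch; the provider constant and `client` variable are assumptions, while the request/response fields come from the schema snippets in this review):

```go
// Delete a finished batch through the core API; mirrors the other Batch* calls.
resp, bifrostErr := client.BatchDeleteRequest(ctx, &schemas.BifrostBatchDeleteRequest{
	Provider: schemas.Gemini, // assumed provider constant
	BatchID:  "batches/abc123",
})
if bifrostErr != nil {
	return bifrostErr // carries provider and request-type ExtraFields
}
if resp.Deleted {
	// the provider confirmed the deletion
}
```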
core/providers/gemini/batch.go (1)
986-1043: Gemini BatchDelete implementation is consistent with other batch operations. Operation-allowance check, URL construction, auth headers, status handling (200/204), and response shaping all mirror the existing Gemini batch methods. This looks solid.
transports/bifrost-http/integrations/router.go (1)
81-89: BatchDelete is cleanly wired through the generic router. The new `DeleteRequest` field, `BatchDeleteResponseConverter` type/RouteConfig slot, and the `schemas.BatchDeleteRequest` branch in `handleBatchRequest` all mirror the existing batch operations. Error handling and PostCallback integration are consistent. No issues here.

Also applies to: 159-162, 271-301, 822-843
transports/bifrost-http/integrations/genai.go (3)
663-778: LGTM: Batch parameter extraction functions are well-structured. The batch parameter extractors follow consistent patterns with the existing file upload extractors. The defensive normalization of batch IDs (ensuring a "batches/" prefix) and clear error messages for missing/invalid parameters are good practices.

1247-1256: LGTM: Router initialization correctly integrates batch routes. The addition of batch route configurations at line 1251 properly extends the GenAI router with batch API support, following the established pattern for file routes.

438-661: Stack context: Batch API integration aligns with the core batch delete feature. Based on the coding guidelines to consider the full stack, this transport-layer batch API implementation properly integrates with the core batch delete functionality introduced in the stack (PR #1001 building on PR #955). The route configurations, converters, and parameter extractors follow consistent patterns and correctly bridge Gemini SDK requests to Bifrost internal schemas.

Note: Several critical and major issues flagged in previous reviews (binary corruption in multipart parsing, missing error handling in `generateSessionID`, reverse proxy header support, cleanup goroutine shutdown) remain unaddressed and should be resolved before merging the stack.
```go
		// Handle src field - can be string (file reference) or array (inline requests)
		switch src := sdkReq.Src.(type) {
		case string:
			// File-based input: src="files/display_name"
			// TrimPrefix is safe even if prefix doesn't exist
			bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
		case []interface{}:
			// Inline requests: src=[{contents: [...], config: {...}}]
			requests := make([]schemas.BatchRequestItem, 0, len(src))
			for i, item := range src {
				if itemMap, ok := item.(map[string]interface{}); ok {
					customID := fmt.Sprintf("request-%d", i)
					requests = append(requests, schemas.BatchRequestItem{
						CustomID: customID,
						Body:     itemMap,
					})
				}
			}
			bifrostReq.Requests = requests
		}
```
Add a default case to handle unexpected src types.
The switch statement on sdkReq.Src handles string and []interface{} cases, but silently ignores any other type. If Src is neither (e.g., due to malformed input or SDK changes), both InputFileID and Requests remain unset, which may cause confusing downstream errors or silent failures.
Apply this diff to explicitly handle unexpected types:
```diff
 	switch src := sdkReq.Src.(type) {
 	case string:
 		// File-based input: src="files/display_name"
 		// TrimPrefix is safe even if prefix doesn't exist
 		bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
 	case []interface{}:
 		// Inline requests: src=[{contents: [...], config: {...}}]
 		requests := make([]schemas.BatchRequestItem, 0, len(src))
 		for i, item := range src {
 			if itemMap, ok := item.(map[string]interface{}); ok {
 				customID := fmt.Sprintf("request-%d", i)
 				requests = append(requests, schemas.BatchRequestItem{
 					CustomID: customID,
 					Body:     itemMap,
 				})
 			}
 		}
 		bifrostReq.Requests = requests
+	default:
+		return nil, fmt.Errorf("invalid src type: expected string or array, got %T", src)
 	}
```
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go around lines 463 to 482, the
switch over sdkReq.Src handles only string and []interface{} and silently
ignores other types; add a default case that detects unexpected types, logs the
actual runtime type (e.g., fmt.Sprintf("%T", sdkReq.Src)), and returns or
propagates a clear error (or sets an error on the response path used in this
function) so malformed/unknown src values fail fast with a descriptive message
instead of causing downstream silent failures.
ee1a292 to 3cdb5b9 (Compare)
9c92a1e to d69d5e9 (Compare)
3cdb5b9 to 2ea6dcc (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (17)
transports/bifrost-http/integrations/utils.go (1)
194-196: Remove debug print of full HTTP response body from `sendSuccess`.

`fmt.Printf` on Line 196 logs the entire JSON response body, which can leak PII, secrets, and large payloads into logs and is hard to control in production. Given this is a generic router utility, this is a security/privacy risk and should be removed or replaced with a metadata-only log (status, content length, maybe request id), without including the body.

Minimal safe change:

```diff
-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
-	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
+	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
+	ctx.SetBody(responseBody)
```

If you still want debug logging, prefer using the structured logger and log only non-sensitive fields.
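A metadata-only replacement could look like this (a sketch, assuming a slog-style key/value logger is available in this package):

```go
// Log only response metadata; the body may carry user content or credentials.
logger.Debug("sendSuccess",
	"status", 200,
	"content_length", len(responseBody),
)
```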
core/providers/gemini/files.go (1)
466-485: Gemini upload wrapper/mapping looks correct (update stale comment)

The new `GeminiFileUploadResponseWrapper` and `ToGeminiFileUploadResponse` correctly produce a `{ "file": { ... } }` payload with the expected Gemini fields, and the hard-coded MIME type is reasonable given the current Bifrost response shape. The top comment about "snake_case field names" is now misleading since the JSON tags are camelCase; consider rephrasing it to describe the wrapper shape instead.

```diff
-// Uses snake_case field names to match Google's API format.
-// GeminiFileUploadResponseWrapper is a wrapper that contains the file response for the upload API.
+// GeminiFileUploadResponseWrapper wraps the Gemini file object in a top-level "file" field
+// to match the upload API response shape.
```

transports/bifrost-http/integrations/router.go (3)
424-427: Callcancel()when PreCallback short-circuits with ErrResumableUploadInitWhen
PreCallbackreturnsErrResumableUploadInit, the handler returns without ever calling thecancelfunction created byConvertToBifrostContext, leaving that cancellable context un-cancelled on this path. Even if no upstream call is made, it’s safer and more consistent with the rest of the handler to cancel here.- if err := config.PreCallback(ctx, bifrostCtx, req); err != nil { - // Check if this is a resumable upload init that was already handled - if err == ErrResumableUploadInit { - // Response was already written by the PreCallback, just return - return - } + if err := config.PreCallback(ctx, bifrostCtx, req); err != nil { + // Check if this is a resumable upload init that was already handled + if err == ErrResumableUploadInit { + // Response was already written by the PreCallback; clean up context and return + cancel() + return + } g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error())) return }
870-895: Removefmt.Printfdebug logs from file upload pathThese
fmt.Printfcalls in the file upload handler bypass the structured logger and will spam stdout in production. Prior review already requested their removal or replacement withg.logger.Debug(...).- fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename) fileResponse, bifrostErr := g.client.FileUploadRequest(requestCtx, fileReq.UploadRequest) if bifrostErr != nil { - errMsg := "unknown error" - if bifrostErr.Error != nil { - errMsg = bifrostErr.Error.Message - } - fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider) g.sendError(ctx, bifrostCtx, config.ErrorConverter, bifrostErr) return } - fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID) if config.PostCallback != nil { - fmt.Printf("[DEBUG] router: calling PostCallback\n") if err := config.PostCallback(ctx, req, fileResponse); err != nil { - fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err) g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute post-request callback")) return } - fmt.Printf("[DEBUG] router: PostCallback success\n") } if config.FileUploadResponseConverter != nil { - fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n") response, err = config.FileUploadResponseConverter(bifrostCtx, fileResponse) - fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err) } else { response = fileResponse }If you still need debug visibility here, swap these for
g.logger.Debugcalls with structured fields rather than rawfmt.Printf.
1012-1013: Drop finalfmt.Printfin file response error handlingThis final
fmt.Printflog on file response conversion error is another direct stdout write that should be removed or routed through the logger; it was already flagged in a prior review.- if err != nil { - fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err) - g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response")) - return - } + if err != nil { + g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response")) + return + }core/providers/gemini/batch.go (3)
870-896: Fix BatchStats computation to avoid negative counts.The current stats math at lines 887-890 can produce invalid negative values:
- When
Total == 0butCompleted > 0,PendingRequestCount = Total - Completedbecomes negative.- When
Completed < Failed,SuccessfulRequestCount = Completed - Failedbecomes negative.Apply defensive arithmetic as suggested in the prior review to ensure all counts remain non-negative and Total is derived when zero.
898-933: Fix BatchStats computation to avoid negative counts.The same arithmetic issue from
ToGeminiBatchJobResponseapplies here at lines 914-917. Apply the same defensive computation pattern.
935-972: Fix BatchStats computation to avoid negative counts.The same arithmetic issue exists in the loop at lines 953-956. Apply the defensive computation pattern to each batch item.
transports/bifrost-http/integrations/genai.go (9)
45-49: Handle error fromcrypto/rand.Readfor security.Line 47 ignores the error from
rand.Read, which could lead to predictable session IDs if the random source fails. This is a security concern for session hijacking.
52-67: Provide a mechanism to stop the cleanup goroutine.The cleanup goroutine runs indefinitely with no way to stop it, preventing graceful shutdown. Add a context or stop channel to allow clean termination.
463-482: Add a default case to handle unexpectedSrctypes.The switch on
sdkReq.Srchandlesstringand[]interface{}but silently ignores other types. WhenSrcis neither (e.g., due to malformed input), bothInputFileIDandRequestsremain unset, causing confusing downstream errors.
792-793: Remove debugfmt.Printfstatements.Multiple
[DEBUG]print statements are present throughout the file upload parsing functions. Remove these before merging or replace with structured logging.Also applies to: 814-814, 824-824, 835-836, 841-841, 854-855, 860-860
867-934: Critical: Binary data corruption in multipart/related parsing.Line 887 converts the request body to a string (
strings.Split(string(body), delimiter)), then line 929 converts it back to bytes (uploadReq.File = []byte(content)). Go strings are UTF-8 encoded, so any binary data containing null bytes or invalid UTF-8 sequences will be corrupted.Use
bytes.Split()instead ofstrings.Split()to preserve binary data integrity.
1056-1056: Hardcoded purpose may be incorrect for non-batch uploads.The purpose is hardcoded to
"batch"at line 1056. If this endpoint is used for other purposes (e.g., general file storage), this default may be incorrect. Consider reading the purpose from phase-1 metadata.
1001-1001: Remove debugfmt.Printfstatements.Additional
[DEBUG]print statements in phase-2 upload functions should be removed or replaced with structured logging.Also applies to: 1013-1013, 1022-1022, 1034-1034, 1046-1046, 1062-1063, 1072-1077
1138-1163: Support reverse proxy headers for URL construction.Lines 1140-1145 use
ctx.Host()andctx.IsTLS()directly without checkingX-Forwarded-ProtoandX-Forwarded-Hostheaders. When Bifrost runs behind a reverse proxy, this URL will reference internal addresses, causing client requests to fail.
1108-1108: Remove debugfmt.Printfstatements.More
[DEBUG]print statements in parameter extraction functions should be removed or replaced with structured logging.Also applies to: 1125-1125, 1129-1129, 1147-1148
🧹 Nitpick comments (1)
transports/bifrost-http/integrations/router.go (1)
277-291: RouteConfig batch delete hooks are consistent (name nit only)
BatchDeleteResponseConverteronRouteConfigslots cleanly into the existing batch converter set and is optional like the others. TheBatchCreateRequestConvertername is a bit misleading now that it’s used for all batch operations (including delete); if you touch this again, consider renaming it toBatchRequestConverterfor clarity.- BatchCreateRequestConverter BatchRequestConverter // Function to convert request to BatchRequest (for batch operations) + BatchRequestConverter BatchRequestConverter // Function to convert request to BatchRequest (for batch operations)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- core/providers/bedrock/batch.go
- core/providers/parasail/batch.go
- core/providers/groq/batch.go
- core/providers/vertex/batch.go
- core/schemas/bifrost.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/elevenlabs/batch.go
- transports/bifrost-http/integrations/utils.go
- core/schemas/batch.go
- core/providers/anthropic/batch.go
- core/providers/mistral/batch.go
- core/bifrost.go
- core/providers/cohere/batch.go
- core/providers/sgl/batch.go
- core/providers/cerebras/batch.go
- core/providers/gemini/files.go
- core/providers/perplexity/batch.go
- core/providers/gemini/batch.go
- core/providers/azure/batch.go
- core/providers/openrouter/batch.go
- transports/bifrost-http/integrations/genai.go
- core/schemas/provider.go
- transports/bifrost-http/integrations/router.go
- core/providers/gemini/types.go
- core/providers/ollama/batch.go
🧬 Code graph analysis (13)
core/providers/elevenlabs/batch.go (4)
core/providers/elevenlabs/elevenlabs.go (1)
ElevenlabsProvider(23-29)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/schemas/batch.go (2)
core/schemas/provider.go (1)
Provider(313-362)core/schemas/bifrost.go (2)
ModelProvider(32-32)BifrostResponseExtraFields(295-304)
core/providers/anthropic/batch.go (4)
core/providers/anthropic/anthropic.go (1)
AnthropicProvider(21-28)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/providers/mistral/batch.go (4)
core/providers/mistral/mistral.go (1)
MistralProvider(17-22)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/bifrost.go (4)
core/schemas/bifrost.go (4)
BatchDeleteRequest(103-103)BifrostError(364-373)BifrostErrorExtraFields(431-435)RequestType(83-83)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/provider.go (1)
Provider(313-362)core/schemas/account.go (1)
Key(8-17)
core/providers/cohere/batch.go (4)
core/providers/cohere/cohere.go (1)
CohereProvider(67-73)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/providers/cerebras/batch.go (3)
core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/providers/gemini/files.go (1)
core/schemas/files.go (1)
BifrostFileUploadResponse(89-105)
core/providers/gemini/batch.go (4)
core/schemas/batch.go (11)
BatchStatus(5-5)BatchStatusValidating(8-8)BatchStatusInProgress(10-10)BatchStatusFinalizing(11-11)BatchStatusCompleted(12-12)BatchStatusFailed(9-9)BatchStatusCancelling(14-14)BatchStatusCancelled(15-15)BatchStatusExpired(13-13)BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/providers/gemini/types.go (11)
GeminiJobStatePending(1582-1582)GeminiJobStateRunning(1583-1583)GeminiJobStateSucceeded(1584-1584)GeminiJobStateFailed(1585-1585)GeminiJobStateCancelling(1586-1586)GeminiJobStateCancelled(1587-1587)GeminiBatchJobResponseSDK(1618-1624)GeminiBatchMetadata(1499-1512)GeminiBatchStats(1433-1437)GeminiBatchDest(1533-1536)GeminiBatchListResponseSDK(1627-1630)core/providers/gemini/gemini.go (1)
GeminiProvider(23-29)core/providers/utils/utils.go (3)
NewBifrostOperationError(449-460)SetExtraHeaders(178-208)MakeRequestWithContext(39-93)
core/providers/azure/batch.go (4)
core/providers/azure/azure.go (1)
AzureProvider(22-27)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/providers/openrouter/batch.go (4)
core/providers/openrouter/openrouter.go (1)
OpenRouterProvider(18-23)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/schemas/provider.go (2)
core/schemas/bifrost.go (2)
BatchDeleteRequest(103-103)BifrostError(364-373)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)
core/providers/ollama/batch.go (5)
core/providers/ollama/ollama.go (1)
OllamaProvider(18-23)core/schemas/account.go (1)
Key(8-17)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (26)
core/providers/openrouter/batch.go (1)
35-38:BatchDeletestub correctly flags operation as unsupported.The new
BatchDeletemethod matches the provider interface and usesNewUnsupportedOperationErrorwithschemas.BatchDeleteRequestandprovider.GetProviderKey(), consistent with the other OpenRouter batch stubs.core/providers/azure/batch.go (1)
437-441: ConsistentBatchDeleteunsupported stub for Azure.The Azure
BatchDeletemethod cleanly returnsNewUnsupportedOperationErrorwithBatchDeleteRequestand the provider key, matching the pattern used for other unsupported batch operations across providers.core/providers/cerebras/batch.go (1)
35-38: CerebrasBatchDeletestub matches existing unsupported batch pattern.The method signature and use of
NewUnsupportedOperationErrorwithBatchDeleteRequestare consistent with the other Cerebras batch stubs; no issues from an API or behavior standpoint.core/schemas/batch.go (1)
265-281:BifrostBatchDeleteRequest/Responseschemas align with existing batch types.The new request/response structs follow the established pattern (Provider + BatchID + ExtraParams on the request, and core identifiers plus
ExtraFieldson the response). They should integrate cleanly with the Provider interface and existing batch plumbing.core/providers/cohere/batch.go (1)
35-38: CohereBatchDeletecorrectly exposed as unsupported.This stub cleanly satisfies the Provider interface and uses
NewUnsupportedOperationErrorwithBatchDeleteRequest, matching the other Cohere batch stubs.core/providers/ollama/batch.go (1)
35-38: OllamaBatchDeletestub is consistent with other unsupported operations.The method correctly returns
NewUnsupportedOperationErrorforBatchDeleteRequestand uses the provider key, in line with the rest of the Ollama batch stubs.core/providers/anthropic/batch.go (1)
783-786: AnthropicBatchDeletestub cleanly signals unsupported operation.The new
BatchDeletemethod conforms to the Provider interface and usesNewUnsupportedOperationErrorwithBatchDeleteRequestand the Anthropic provider key, matching the established pattern for unsupported batch operations.core/providers/mistral/batch.go (1)
35-38: BatchDelete stub matches existing unsupported batch operationsImplementation is consistent with other Mistral batch stubs and correctly uses
NewUnsupportedOperationErrorwithBatchDeleteRequestand provider key.core/schemas/provider.go (3)
182-183: AllowedRequests/IsOperationAllowed correctly extended for batch delete
BatchDeleteflag and theBatchDeleteRequestswitch case follow the existing pattern for other batch ops; this will work as expected with both provider and custom-provider configs (nilAllowedRequestsstill means “all allowed”).
231-232: Request-type mapping for BatchDelete is consistentRouting
BatchDeleteRequesttoar.BatchDeletemirrors other batch operations and keeps authorization logic centralized.
350-351: Provider interface extension for BatchDelete is coherentAdding
BatchDeletewith the same(ctx, key, request) (*Resp, *Err)signature keeps provider implementations uniform with the rest of the batch API.core/providers/elevenlabs/batch.go (1)
35-38: Elevenlabs BatchDelete stub is correctly wiredThe unsupported-op implementation for
BatchDeletematches the existing batch stubs and uses the properBatchDeleteRequestrequest type.core/providers/perplexity/batch.go (1)
35-38: Perplexity BatchDelete unsupported stub is consistent
BatchDeletecorrectly returnsNewUnsupportedOperationErrorwithBatchDeleteRequestand the provider key, aligning with other Perplexity batch methods.core/providers/sgl/batch.go (1)
35-38: SGL BatchDelete unsupported stub is correctly wired
BatchDeletefollows the established unsupported-op pattern and uses the properBatchDeleteRequestrequest type.transports/bifrost-http/integrations/router.go (3)
80-89: BatchRequest extended correctly for delete operationsAdding
DeleteRequest *schemas.BifrostBatchDeleteRequesttoBatchRequestgives the router a symmetric way to represent delete operations alongside create/list/retrieve/cancel/results.
159-162: BatchDeleteResponseConverter type fits existing converter patternThe new
BatchDeleteResponseConverterfollows the same(ctx, resp) (interface{}, error)pattern as the other batch converters, allowing integrations (e.g., GenAI) to customize delete responses when needed.
822-843: Batch delete routing matches other batch operationsThe new
BatchDeleteRequestcase mirrors the existing create/list/retrieve/cancel/results paths: it validatesDeleteRequest, callsBatchDeleteRequeston the Bifrost client, runsPostCallback, and then optionally passes throughBatchDeleteResponseConverter. Behavior on success/error is consistent with the rest ofhandleBatchRequest.core/bifrost.go (1)
1133-1209: BatchDeleteRequest is properly wired end-to-end across core, providers, and transportsThe method correctly mirrors batch retrieve/cancel/results patterns. Verification confirms:
- All 20+ providers have BatchDelete implementations (Gemini with full support, others returning unsupported error)
- HTTP transport router handles BatchDeleteRequest case and calls g.client.BatchDeleteRequest
- Response converter mechanism available via BatchDeleteResponseConverter
- Schema types properly defined (BifrostBatchDeleteRequest, BifrostBatchDeleteResponse, RequestType enum)
- Provider interface defines correct method signature
core/providers/gemini/batch.go (3)
846-868: LGTM: Status mapping is complete and correct.The function properly maps all Bifrost batch statuses to Gemini SDK job states with a sensible default.
974-984: LGTM: Simple and correct conversion.The function appropriately converts the cancel response with only the essential fields.
986-1043: LGTM: BatchDelete implementation follows established patterns.The function properly validates inputs, constructs URLs with correct prefix handling, uses context-aware HTTP requests, and handles both 200 and 204 success status codes. The implementation is consistent with other batch operations in this file.
transports/bifrost-http/integrations/genai.go (2)
663-778: LGTM: Parameter extraction functions are well-structured.These four helper functions consistently extract provider, model, and batch_id from headers and path parameters with appropriate defaults and error handling.
1169-1245: LGTM: File parameter extraction functions are consistent.These three helper functions consistently extract provider and file_id with appropriate defaults and error handling, following the same pattern as batch parameter extraction.
core/providers/gemini/types.go (3)
1578-1589: LGTM: Job state constants are complete.The SDK job state constants properly cover all states including the distinction between Queued and Pending, aligning with the Google GenAI SDK.
1594-1602: Type definition is appropriate; validation handled in transport layer.The
Src interface{}field correctly supports both string (file reference) and array (inline requests) as per the Google GenAI SDK. The validation for this field is handled in the transport layer, which has already been flagged in a separate review comment.
1656-1662: LGTM: GeminiFileUploadRequest changes support resumable uploads.The struct changes appropriately:
- Remove Provider dependency to decouple from schemas package
- Add MimeType for proper content type handling
- Add ResumableSessionID for session-based uploads
These changes align with the resumable upload implementation.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 2
♻️ Duplicate comments (10)
core/providers/gemini/files.go (1)
466-467: Misleading comment: struct uses camelCase, not snake_case.

The comment states "Uses snake_case field names to match Google's API format" but `GeminiFileResponse` uses camelCase JSON tags ("name", "displayName", "mimeType", etc.). Consider updating the comment to accurately describe the JSON shape.

core/providers/gemini/batch.go (1)
870-896: Fix pending count calculation to prevent negative values.

The pending count calculation on lines 888-890 can produce negative values when `resp.RequestCounts.Total` is zero but `Completed` is non-zero, which would result in invalid statistics.

This issue was previously flagged and also affects lines 914-918 and 953-957.
transports/bifrost-http/integrations/router.go (2)
419-429: Callcancel()whenPreCallbackshort-circuits withErrResumableUploadInitWhen
config.PreCallbackreturnsErrResumableUploadInit, the handler returns without ever callingcancel()fromConvertToBifrostContext, leaving that derived context un-cancelled on this path. You should invokecancel()before returning to keep context lifecycle consistent and avoid leaks.if config.PreCallback != nil { if err := config.PreCallback(ctx, bifrostCtx, req); err != nil { // Check if this is a resumable upload init that was already handled if err == ErrResumableUploadInit { - // Response was already written by the PreCallback, just return - return + // Response was already written by the PreCallback; clean up context and return + cancel() + return } g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error())) + cancel() return } }
864-895: Removefmt.Printfdebug logging from file handlersThere are still several
[DEBUG]fmt.Printfcalls in the file-upload path (request start, error/success logs, callbacks, and converter path) and one on conversion error. These bypass the structured logger and spam stdout in production.Please either remove them or replace with
g.logger.Debug(...)(or equivalent structured logging) carrying the same context (provider, purpose, filename, error, response ID).Also applies to: 1011-1013
transports/bifrost-http/integrations/genai.go (6)
51-67: Provide a way to stop the session cleanup goroutine

The `init` function starts a ticker-driven goroutine that runs for the lifetime of the process with no shutdown signal. In environments that expect graceful shutdown, this goroutine cannot be stopped and may complicate clean exit or tests.

Consider introducing a package-level stop channel or context to terminate the loop, and expose a small shutdown hook:
```diff
-var uploadSessions = sync.Map{}
+var uploadSessions = sync.Map{}
+var uploadSessionsCleanupStop = make(chan struct{})

 func init() {
 	go func() {
 		ticker := time.NewTicker(10 * time.Minute)
-		for range ticker.C {
+		defer ticker.Stop()
+		for {
+			select {
+			case <-ticker.C:
 				now := time.Now()
 				uploadSessions.Range(func(key, value interface{}) bool {
 					if session, ok := value.(*uploadSession); ok {
 						if now.Sub(session.CreatedAt) > time.Hour {
 							uploadSessions.Delete(key)
 						}
 					}
 					return true
 				})
+			case <-uploadSessionsCleanupStop:
+				return
+			}
 		}
 	}()
 }
+
+// ShutdownGenAIUploads allows tests or the server shutdown path to stop the cleanup goroutine.
+func ShutdownGenAIUploads() {
+	close(uploadSessionsCleanupStop)
+}
```
780-865: Drop or convert `fmt.Printf` debug logging to structured logger

There are many `[DEBUG]` `fmt.Printf` calls across the Gemini file/resumable helpers (`parseGeminiFileUploadRequest`, `parseGeminiResumableUpload`, `parseGeminiResumableUploadPhase2`, `setResumableUploadFinalStatus`, `extractGeminiResumableUploadParams`, `extractGeminiFileUploadParams`). These will spam stdout and can leak request/response details in production.

Recommend:

- Remove these debug prints entirely, or
- Replace them with the integration's structured logger (e.g., `logger.Debug("...", fields...)`), and avoid marshalling full responses just for logging.

Also applies to: 1013-1077, 1088-1109, 1125-1135
45-49: Handle `crypto/rand.Read` error in `generateSessionID`

`generateSessionID` ignores the error from `rand.Read`, which can theoretically fail and yield low-entropy or zeroed IDs. For a session identifier this is undesirable.

Consider handling the error explicitly and using a well-defined fallback:
```diff
 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
-	return hex.EncodeToString(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Extremely rare; fall back to a timestamp-based ID rather than silently degrading entropy.
+		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().UnixNano())
+	}
+	return hex.EncodeToString(bytes)
 }
```

Is `crypto/rand.Read` documented as potentially returning an error that should be handled, and what are the recommended patterns for handling failures when generating random IDs?
867-934: Fix binary corruption in `parseGeminiMultipartRelated`

`parseGeminiMultipartRelated` converts the entire body to a string (`strings.Split(string(body), delimiter)`) and later casts the content back to `[]byte`. The conversions themselves preserve bytes, but the string-based trimming applied to each part can strip bytes that belong to a binary payload, and the round-trips copy the whole body unnecessarily; together this breaks binary file uploads via multipart/related. Operate on the raw
`[]byte` instead, using `bytes` APIs:

```diff
+import "bytes"
@@
-	// Split body by boundary
-	delimiter := "--" + boundary
-	parts := strings.Split(string(body), delimiter)
+	// Split body by boundary without converting to string (preserve binary)
+	delimiter := []byte("--" + boundary)
+	parts := bytes.Split(body, delimiter)
@@
-	for _, part := range parts {
-		part = strings.TrimSpace(part)
-		if part == "" || part == "--" {
+	for _, part := range parts {
+		part = bytes.TrimSpace(part)
+		if len(part) == 0 || bytes.Equal(part, []byte("--")) {
 			continue
 		}
@@
-		headerEnd := strings.Index(part, "\r\n\r\n")
-		if headerEnd == -1 {
-			headerEnd = strings.Index(part, "\n\n")
+		headerEnd := bytes.Index(part, []byte("\r\n\r\n"))
+		if headerEnd == -1 {
+			headerEnd = bytes.Index(part, []byte("\n\n"))
@@
-		headers := part[:headerEnd]
-		content := part[headerEnd:]
-		content = strings.TrimPrefix(content, "\r\n\r\n")
-		content = strings.TrimPrefix(content, "\n\n")
+		headers := part[:headerEnd]
+		content := part[headerEnd:]
+		content = bytes.TrimPrefix(content, []byte("\r\n\r\n"))
+		content = bytes.TrimPrefix(content, []byte("\n\n"))
@@
-		headersLower := strings.ToLower(headers)
+		headersLower := strings.ToLower(string(headers))
@@
-		if err := sonic.Unmarshal([]byte(content), &metadata); err == nil {
+		if err := sonic.Unmarshal(content, &metadata); err == nil {
@@
-		// This is file content
-		uploadReq.File = []byte(content)
+		// This is file content
+		uploadReq.File = append([]byte(nil), content...)
```
450-482: Handle unexpected `Src` types in batch-create mapping

In the `CreateGenAIBatchRouteConfigs` create path, the `switch` on `sdkReq.Src` handles `string` and `[]interface{}` but silently ignores any other type, leaving both `InputFileID` and `Requests` unset and deferring a confusing error to deeper layers.

Add a default case that fails fast with a clear error:

```diff
 	switch src := sdkReq.Src.(type) {
 	case string:
 		// File-based input: src="files/display_name"
 		bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
 	case []interface{}:
 		// Inline requests...
 		// ...
 		bifrostReq.Requests = requests
+	default:
+		return nil, fmt.Errorf("invalid src type: expected string or array, got %T", src)
 	}
```
1115-1163: Honor reverse-proxy headers when constructing resumable upload URL
`extractGeminiFileUploadParams` builds the resumable upload URL from `ctx.IsTLS()` and `ctx.Host()`. Behind a reverse proxy/ingress, this will point clients at the internal address instead of the public one.

Align this with existing patterns (e.g., in session handlers) by:

- Checking `X-Forwarded-Proto` first to determine the scheme (`https` vs `http`).
- Checking `X-Forwarded-Host` (or similar) for the external host and falling back to `ctx.Host()` only if missing.
- scheme := "http" - if ctx.IsTLS() { - scheme = "https" - } - host := string(ctx.Host()) + scheme := "http" + if xfProto := string(ctx.Request.Header.Peek("X-Forwarded-Proto")); strings.Contains(strings.ToLower(xfProto), "https") { + scheme = "https" + } else if ctx.IsTLS() { + scheme = "https" + } + host := string(ctx.Request.Header.Peek("X-Forwarded-Host")) + if host == "" { + host = string(ctx.Host()) + }
🧹 Nitpick comments (1)
transports/bifrost-http/integrations/genai.go (1)
943-982: Consider safer file reading pattern in multipart upload helper (optional)

In `parseGeminiMultipartUpload`, the file read uses a single `file.Read(fileData)` into a pre-sized buffer. For large files or certain readers this may return a short read with `err == io.EOF`, which you currently treat as an error. It would be more robust to use `io.ReadAll` or `io.ReadFull`:

```diff
-	fileData := make([]byte, fileHeader.Size)
-	if _, err := file.Read(fileData); err != nil {
-		return err
-	}
+	fileData, err := io.ReadAll(file)
+	if err != nil {
+		return err
+	}
```

This is not strictly new behavior but becomes more important now that uploads are a key Gemini path.
io.ReadAllorio.ReadFull:- fileData := make([]byte, fileHeader.Size) - if _, err := file.Read(fileData); err != nil { - return err - } + fileData, err := io.ReadAll(file) + if err != nil { + return err + }This is not strictly new behavior but becomes more important now that uploads are a key Gemini path.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
- core/providers/azure/batch.go
- core/schemas/provider.go
- core/providers/parasail/batch.go
- core/schemas/bifrost.go
- core/providers/openrouter/batch.go
- core/providers/bedrock/batch.go
- core/providers/cohere/batch.go
- core/providers/mistral/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/perplexity/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/bifrost.gocore/schemas/batch.gocore/providers/elevenlabs/batch.gocore/providers/vertex/batch.gocore/providers/sgl/batch.gocore/providers/groq/batch.gocore/providers/cerebras/batch.gocore/providers/gemini/batch.gocore/providers/anthropic/batch.gotransports/bifrost-http/integrations/router.gotransports/bifrost-http/integrations/genai.gocore/providers/gemini/files.gocore/providers/ollama/batch.gocore/providers/gemini/types.go
🧬 Code graph analysis (12)
core/bifrost.go (4)
core/schemas/bifrost.go (5)
BatchDeleteRequest(103-103)BifrostError(364-373)ErrorField(382-389)BifrostErrorExtraFields(431-435)RequestType(83-83)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/provider.go (2)
Provider(313-362)CustomProviderConfig(248-254)ui/lib/types/config.ts (2)
CustomProviderConfig(153-158)RequestType(122-134)
core/schemas/batch.go (2)
core/schemas/provider.go (1)
Provider(313-362)core/schemas/bifrost.go (2)
ModelProvider(32-32)BifrostResponseExtraFields(295-304)
core/providers/elevenlabs/batch.go (4)
core/providers/elevenlabs/elevenlabs.go (1)
ElevenlabsProvider(23-29)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/providers/vertex/batch.go (4)
core/providers/vertex/vertex.go (1)
VertexProvider(57-62)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/providers/groq/batch.go (4)
core/providers/groq/groq.go (1)
GroqProvider(16-21)core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/providers/cerebras/batch.go (3)
core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (15)
BatchStatus(5-5)BatchStatusValidating(8-8)BatchStatusInProgress(10-10)BatchStatusFinalizing(11-11)BatchStatusCompleted(12-12)BatchStatusFailed(9-9)BatchStatusCancelling(14-14)BatchStatusCancelled(15-15)BatchStatusExpired(13-13)BifrostBatchCreateResponse(85-109)BifrostBatchRetrieveResponse(152-187)BifrostBatchListResponse(129-140)BifrostBatchCancelResponse(199-208)BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/providers/gemini/types.go (11)
GeminiJobStatePending(1582-1582)GeminiJobStateRunning(1583-1583)GeminiJobStateSucceeded(1584-1584)GeminiJobStateFailed(1585-1585)GeminiJobStateCancelling(1586-1586)GeminiJobStateCancelled(1587-1587)GeminiBatchJobResponseSDK(1618-1624)GeminiBatchMetadata(1499-1512)GeminiBatchStats(1433-1437)GeminiBatchDest(1533-1536)GeminiBatchListResponseSDK(1627-1630)
core/providers/anthropic/batch.go (3)
core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
transports/bifrost-http/integrations/router.go (4)
core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)transports/bifrost-http/integrations/genai.go (1)
ErrResumableUploadInit(36-36)core/schemas/bifrost.go (2)
BatchDeleteRequest(103-103)FileUploadRequest(104-104)core/schemas/provider.go (1)
Provider(313-362)
core/providers/gemini/files.go (1)
core/schemas/files.go (1)
BifrostFileUploadResponse(89-105)
core/providers/ollama/batch.go (3)
core/schemas/batch.go (2)
BifrostBatchDeleteRequest(266-272)BifrostBatchDeleteResponse(275-281)core/schemas/bifrost.go (2)
BifrostError(364-373)BatchDeleteRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(391-402)
core/providers/gemini/types.go (1)
core/schemas/models.go (1)
Model(109-129)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (16)
core/providers/groq/batch.go (1)
35-38: LGTM! The `BatchDelete` method follows the established pattern for unsupported batch operations in this provider, consistent with the other Batch* methods. The signature and error handling align with the new `BifrostBatchDeleteRequest`/`BifrostBatchDeleteResponse` types from the schema.

core/providers/vertex/batch.go (1)

35-38: LGTM! The `BatchDelete` stub follows the same unsupported-operation pattern as other batch methods in this file, maintaining consistency across the Vertex AI provider implementation.

core/providers/elevenlabs/batch.go (1)

35-38: LGTM! The `BatchDelete` method correctly implements the unsupported-operation stub, consistent with the other batch methods for the Elevenlabs provider.

core/providers/sgl/batch.go (1)

35-38: LGTM! The `BatchDelete` method follows the same unsupported-operation pattern, maintaining interface compliance for the SGL provider.

core/providers/gemini/files.go (1)

468-485: Wrapper structure and conversion logic are correct. The `GeminiFileUploadResponseWrapper` properly wraps the file response under a `"file"` key, and `ToGeminiFileUploadResponse` correctly maps all the `BifrostFileUploadResponse` fields to the Gemini format. The hardcoded `MimeType` of `"application/octet-stream"` is appropriate since `BifrostFileUploadResponse` does not include MIME type metadata.

core/providers/gemini/batch.go (1)
986-1043: LGTM! BatchDelete implementation is well-structured. The BatchDelete implementation correctly (see the sketch after this list):
- Validates the operation is allowed and BatchID is present
- Handles both prefixed and non-prefixed batch IDs
- Makes the DELETE request with proper headers
- Accepts both 200 OK and 204 No Content responses
- Returns a properly structured response with latency tracking
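A condensed sketch of that flow (illustrative only; the URL layout, header name, and helper inputs such as `baseURL` and `apiKey` are assumptions based on the behaviors listed above):

```go
import (
	"context"
	"fmt"
	"net/http"
	"strings"
)

// deleteBatch sketches the Gemini flow: normalize the ID, issue the DELETE,
// and treat both 200 OK and 204 No Content as success.
func deleteBatch(ctx context.Context, httpClient *http.Client, baseURL, apiKey, batchID string) error {
	id := strings.TrimPrefix(batchID, "batches/") // accept prefixed or bare IDs

	req, err := http.NewRequestWithContext(ctx, http.MethodDelete,
		fmt.Sprintf("%s/v1beta/batches/%s", baseURL, id), nil)
	if err != nil {
		return err
	}
	req.Header.Set("x-goog-api-key", apiKey) // Gemini API key header

	resp, err := httpClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("batch delete failed: %s", resp.Status)
	}
	return nil
}
```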
core/providers/gemini/types.go (3)
1578-1589: LGTM! Job state constants are well-defined. The new Gemini SDK job state constants provide clear mappings for batch job lifecycle states and align with Google's GenAI SDK conventions.

1594-1651: LGTM! SDK batch types follow consistent patterns. The SDK batch types are well-structured with proper JSON tags and documentation. The `Src` field as `interface{}` in `GeminiBatchCreateRequestSDK` is appropriate for the type definition layer, with validation expected in the transport layer.
Srcfield asinterface{}inGeminiBatchCreateRequestSDKis appropriate for the type definition layer, with validation expected in the transport layer.
1656-1662: No breaking change to GeminiFileUploadRequest signature — review comment is incorrect.The
GeminiFileUploadRequeststruct has never contained aProviderfield. Provider information is tracked separately through theuploadSessionstruct (internal session management) and context values, not as part of the request struct itself. The addition ofMimeTypeandResumableSessionIDfields is correct and properly integrated across all parsing functions.However, there are legitimate concerns:
- No tests exist for
GeminiFileUploadRequestfunctionality- No migration notes or documentation provided for the resumable upload changes
Likely an incorrect or invalid review comment.
core/providers/cerebras/batch.go (1)
35-38: LGTM! BatchDelete follows the established pattern.The BatchDelete implementation correctly signals an unsupported operation and is consistent with the other batch method stubs in this provider.
core/schemas/batch.go (1)
265-281: LGTM! Batch delete types are well-structured.The new
BifrostBatchDeleteRequestandBifrostBatchDeleteResponsetypes follow established patterns from other batch operations, with appropriate fields and JSON tags. TheDeletedboolean field in the response provides clear confirmation of the operation outcome.core/providers/ollama/batch.go (1)
35-38: LGTM! BatchDelete follows the established pattern.The BatchDelete implementation correctly signals an unsupported operation and is consistent with the other batch method stubs in this provider.
core/providers/anthropic/batch.go (1)
793-796: LGTM! BatchDelete follows the established pattern.The BatchDelete implementation correctly signals an unsupported operation and is consistent with the other batch method stubs across providers.
core/bifrost.go (1)
1133-1209: BatchDeleteRequest wiring is consistent with existing batch APIsValidation, config lookup, key selection, and
executeRequestWithRetriesusage mirror the other batch methods (BatchRetrieveRequest,BatchCancelRequest,BatchResultsRequest), includingExtraFieldspopulation. No issues from a core-flow perspective.transports/bifrost-http/integrations/router.go (1)
711-843: BatchDelete batch handler matches other batch operationsThe new
schemas.BatchDeleteRequestbranch correctly validatesDeleteRequest, callsBatchDeleteRequest, runsPostCallback, and then appliesBatchDeleteResponseConverteror returns the raw Bifrost response. This mirrors the other batch cases and keeps the abstraction consistent.transports/bifrost-http/integrations/genai.go (1)
1195-1245: File retrieve/delete extractors look correct and consistent
extractGeminiFileRetrieveParamsandextractGeminiFileDeleteParamsvalidatefile_id, default the provider from header, and populate the Gemini SDK requests. This matches the patterns used by other extractors and keeps provider scoping explicit.
```go
// uploadSessions stores active upload sessions keyed by session ID
var uploadSessions = sync.Map{}

// ErrResumableUploadInit is a sentinel error indicating the resumable upload init response was sent
var ErrResumableUploadInit = errors.New("resumable upload init handled")
```
Avoid mutating uploadSession structs after storing them in sync.Map

uploadSessions stores *uploadSession values in a sync.Map, and later code mutates the pointed-to struct (e.g., setting session.Provider in extractGeminiFileUploadParams) while the cleanup goroutine concurrently reads from the same objects. This pattern can introduce data races even though the map itself is concurrency-safe.

To avoid races, treat uploadSession values as immutable after storing them:
- When you need to set Provider, load the session, copy it, set the field on the copy, and Store the new pointer back into uploadSessions instead of mutating the existing struct.
- Likewise, keep cleanup read-only on the value and only call Delete on the map.

Example for provider update:

```diff
-	if sessionVal, ok := uploadSessions.Load(uploadReq.ResumableSessionID); ok {
-		if session, ok := sessionVal.(*uploadSession); ok {
-			session.Provider = schemas.ModelProvider(provider)
-		}
-	}
+	if sessionVal, ok := uploadSessions.Load(uploadReq.ResumableSessionID); ok {
+		if session, ok := sessionVal.(*uploadSession); ok {
+			updated := *session
+			updated.Provider = schemas.ModelProvider(provider)
+			uploadSessions.Store(uploadReq.ResumableSessionID, &updated)
+		}
+	}
```

Also applies to: 843-850, 1094-1106
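For illustration, a self-contained toy version of that copy-on-write pattern. The struct and map names mirror the PR's, but this is a sketch, not the project code:

```go
package main

import (
	"fmt"
	"sync"
)

type uploadSession struct {
	Provider string
	Filename string
}

var uploadSessions sync.Map

// setProvider stores an updated copy of the session instead of mutating the
// shared pointer, so concurrent readers never race with this write.
func setProvider(id, provider string) {
	if v, ok := uploadSessions.Load(id); ok {
		if s, ok := v.(*uploadSession); ok {
			updated := *s // shallow copy is enough while fields are plain values
			updated.Provider = provider
			uploadSessions.Store(id, &updated)
		}
	}
}

func main() {
	uploadSessions.Store("abc", &uploadSession{Filename: "data.jsonl"})
	setProvider("abc", "gemini")
	v, _ := uploadSessions.Load("abc")
	fmt.Println(v.(*uploadSession).Provider) // gemini
}
```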
```go
// CreateGenAIBatchRouteConfigs creates route configurations for Gemini Batch API endpoints.
func CreateGenAIBatchRouteConfigs(pathPrefix string, handlerStore lib.HandlerStore) []RouteConfig {
	var routes []RouteConfig

	// Create batch endpoint - POST /v1beta/models/{model}:batchGenerateContent
	routes = append(routes, RouteConfig{
		Type:   RouteConfigTypeGenAI,
		Path:   pathPrefix + "/v1beta/models/{model}:batchGenerateContent",
		Method: "POST",
		GetRequestTypeInstance: func() interface{} {
			return &gemini.GeminiBatchCreateRequestSDK{}
		},
		BatchCreateRequestConverter: func(ctx *context.Context, req interface{}) (*BatchRequest, error) {
			if sdkReq, ok := req.(*gemini.GeminiBatchCreateRequestSDK); ok {
				// Get provider from context
				provider := schemas.Gemini
				if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
					provider = p.(schemas.ModelProvider)
				}

				bifrostReq := &schemas.BifrostBatchCreateRequest{
					Provider: provider,
					Model:    sdkReq.Model,
				}

				// Handle src field - can be string (file reference) or array (inline requests)
				switch src := sdkReq.Src.(type) {
				case string:
					// File-based input: src="files/display_name"
					// TrimPrefix is safe even if prefix doesn't exist
					bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
				case []interface{}:
					// Inline requests: src=[{contents: [...], config: {...}}]
					requests := make([]schemas.BatchRequestItem, 0, len(src))
					for i, item := range src {
						if itemMap, ok := item.(map[string]interface{}); ok {
							customID := fmt.Sprintf("request-%d", i)
							requests = append(requests, schemas.BatchRequestItem{
								CustomID: customID,
								Body:     itemMap,
							})
						}
					}
					bifrostReq.Requests = requests
				}

				return &BatchRequest{
					Type:          schemas.BatchCreateRequest,
					CreateRequest: bifrostReq,
				}, nil
			}
			return nil, errors.New("invalid batch create request type")
		},
		BatchCreateResponseConverter: func(ctx *context.Context, resp *schemas.BifrostBatchCreateResponse) (interface{}, error) {
			if resp.ExtraFields.RawResponse != nil {
				return resp.ExtraFields.RawResponse, nil
			}
			return gemini.ToGeminiBatchJobResponse(resp), nil
		},
		ErrorConverter: func(ctx *context.Context, err *schemas.BifrostError) interface{} {
			return gemini.ToGeminiError(err)
		},
		PreCallback: extractGeminiBatchCreateParams,
	})

	// List batches endpoint - GET /v1beta/batches
	routes = append(routes, RouteConfig{
		Type:   RouteConfigTypeGenAI,
		Path:   pathPrefix + "/v1beta/batches",
		Method: "GET",
		GetRequestTypeInstance: func() interface{} {
			return &gemini.GeminiBatchListRequestSDK{}
		},
		BatchCreateRequestConverter: func(ctx *context.Context, req interface{}) (*BatchRequest, error) {
			if sdkReq, ok := req.(*gemini.GeminiBatchListRequestSDK); ok {
				// Get provider from context
				provider := schemas.Gemini
				if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
					provider = p.(schemas.ModelProvider)
				}

				bifrostReq := &schemas.BifrostBatchListRequest{
					Provider: provider,
					PageSize: sdkReq.PageSize,
				}
				if sdkReq.PageToken != "" {
					bifrostReq.PageToken = &sdkReq.PageToken
				}
				return &BatchRequest{
					Type:        schemas.BatchListRequest,
					ListRequest: bifrostReq,
				}, nil
			}
			return nil, errors.New("invalid batch list request type")
		},
		BatchListResponseConverter: func(ctx *context.Context, resp *schemas.BifrostBatchListResponse) (interface{}, error) {
			if resp.ExtraFields.RawResponse != nil {
				return resp.ExtraFields.RawResponse, nil
			}
			return gemini.ToGeminiBatchListResponse(resp), nil
		},
		ErrorConverter: func(ctx *context.Context, err *schemas.BifrostError) interface{} {
			return gemini.ToGeminiError(err)
		},
		PreCallback: extractGeminiBatchListQueryParams,
	})

	// Retrieve batch endpoint - GET /v1beta/batches/{batch_id}
	routes = append(routes, RouteConfig{
		Type:   RouteConfigTypeGenAI,
		Path:   pathPrefix + "/v1beta/batches/{batch_id}",
		Method: "GET",
		GetRequestTypeInstance: func() interface{} {
			return &gemini.GeminiBatchRetrieveRequestSDK{}
		},
		BatchCreateRequestConverter: func(ctx *context.Context, req interface{}) (*BatchRequest, error) {
			if sdkReq, ok := req.(*gemini.GeminiBatchRetrieveRequestSDK); ok {
				// Get provider from context
				provider := schemas.Gemini
				if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
					provider = p.(schemas.ModelProvider)
				}

				return &BatchRequest{
					Type: schemas.BatchRetrieveRequest,
					RetrieveRequest: &schemas.BifrostBatchRetrieveRequest{
						Provider: provider,
						BatchID:  sdkReq.Name,
					},
				}, nil
			}
			return nil, errors.New("invalid batch retrieve request type")
		},
		BatchRetrieveResponseConverter: func(ctx *context.Context, resp *schemas.BifrostBatchRetrieveResponse) (interface{}, error) {
			if resp.ExtraFields.RawResponse != nil {
				return resp.ExtraFields.RawResponse, nil
			}
			return gemini.ToGeminiBatchRetrieveResponse(resp), nil
		},
		ErrorConverter: func(ctx *context.Context, err *schemas.BifrostError) interface{} {
			return gemini.ToGeminiError(err)
		},
		PreCallback: extractGeminiBatchIDFromPath,
	})

	// Cancel batch endpoint - POST /v1beta/batches/{batch_id}:cancel
	routes = append(routes, RouteConfig{
		Type:   RouteConfigTypeGenAI,
		Path:   pathPrefix + "/v1beta/batches/{batch_id}:cancel",
		Method: "POST",
		GetRequestTypeInstance: func() interface{} {
			return &gemini.GeminiBatchCancelRequestSDK{}
		},
		BatchCreateRequestConverter: func(ctx *context.Context, req interface{}) (*BatchRequest, error) {
			if sdkReq, ok := req.(*gemini.GeminiBatchCancelRequestSDK); ok {
				// Get provider from context
				provider := schemas.Gemini
				if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
					provider = p.(schemas.ModelProvider)
				}

				return &BatchRequest{
					Type: schemas.BatchCancelRequest,
					CancelRequest: &schemas.BifrostBatchCancelRequest{
						Provider: provider,
						BatchID:  sdkReq.Name,
					},
				}, nil
			}
			return nil, errors.New("invalid batch cancel request type")
		},
		BatchCancelResponseConverter: func(ctx *context.Context, resp *schemas.BifrostBatchCancelResponse) (interface{}, error) {
			if resp.ExtraFields.RawResponse != nil {
				return resp.ExtraFields.RawResponse, nil
			}
			return gemini.ToGeminiBatchCancelResponse(resp), nil
		},
		ErrorConverter: func(ctx *context.Context, err *schemas.BifrostError) interface{} {
			return gemini.ToGeminiError(err)
		},
		PreCallback: extractGeminiBatchIDFromPathCancel,
	})

	// Delete batch endpoint - DELETE /v1beta/batches/{batch_id}
	routes = append(routes, RouteConfig{
		Type:   RouteConfigTypeGenAI,
		Path:   pathPrefix + "/v1beta/batches/{batch_id}",
		Method: "DELETE",
		GetRequestTypeInstance: func() interface{} {
			return &gemini.GeminiBatchDeleteRequestSDK{}
		},
		BatchCreateRequestConverter: func(ctx *context.Context, req interface{}) (*BatchRequest, error) {
			if sdkReq, ok := req.(*gemini.GeminiBatchDeleteRequestSDK); ok {
				// Get provider from context
				provider := schemas.Gemini
				if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
					provider = p.(schemas.ModelProvider)
				}

				return &BatchRequest{
					Type: schemas.BatchDeleteRequest,
					DeleteRequest: &schemas.BifrostBatchDeleteRequest{
						Provider: provider,
						BatchID:  sdkReq.Name,
					},
				}, nil
			}
			return nil, errors.New("invalid batch delete request type")
		},
		BatchDeleteResponseConverter: func(ctx *context.Context, resp *schemas.BifrostBatchDeleteResponse) (interface{}, error) {
			if resp.ExtraFields.RawResponse != nil {
				return resp.ExtraFields.RawResponse, nil
			}
			// Return empty object on successful delete
			return map[string]interface{}{}, nil
		},
		ErrorConverter: func(ctx *context.Context, err *schemas.BifrostError) interface{} {
			return gemini.ToGeminiError(err)
		},
		PreCallback: extractGeminiBatchIDFromPath,
	})

	return routes
}
```
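For orientation, a minimal client-side sketch of exercising the DELETE route registered above. The host, port, and batch ID are placeholder assumptions, not values from the PR:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// DELETE /genai/v1beta/batches/{batch_id} - mirrors the route defined above.
	url := "http://localhost:8080/genai/v1beta/batches/abc123" // assumed local gateway
	req, err := http.NewRequest(http.MethodDelete, url, nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The handler returns an empty JSON object on successful delete.
	fmt.Println("status:", resp.StatusCode)
}
```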
🧩 Analysis chain
🏁 Scripts executed (against maximhq/bifrost; command outputs are collapsed in this view):

```bash
#!/bin/bash
# Find CreateGenAIRouteConfigs to see the generic route definition
rg -n "CreateGenAIRouteConfigs" transports/bifrost-http/integrations --type go -A 50 | head -100

# Find where routes are registered and understand the order
rg -n "CreateGenAIBatchRouteConfigs|CreateGenAIRouteConfigs" transports/bifrost-http/integrations --type go -B 5

# Check the router library being used
rg -n "router\.|Router|fasthttp" transports/bifrost-http --type go | grep -i "import\|package\|new" | head -20

# Look at lines 1249-1255 mentioned in the comment
sed -n '1249,1255p' transports/bifrost-http/integrations/genai.go

# Check if there are existing tests for route matching
fd -e test.go -e _test.go transports/bifrost-http/integrations | xargs grep -l "batch.*route\|route.*batch" 2>/dev/null

# Find NewGenericRouter to understand route registration
rg -n "func NewGenericRouter" transports/bifrost-http --type go -A 30

# Look for router.Handle or similar route registration calls
rg -n "HandleFunc|Handle\(|router\.Handle" transports/bifrost-http --type go -B 2 -A 2 | head -50

# Search for fasthttp router import and usage
rg -n "router\." transports/bifrost-http/integrations --type go | grep -i "handle\|post\|get" | head -20

# Check for existing batch-route tests (fixed: the original mixed fd and find flags)
find transports/bifrost-http -type f -name "*test.go" | xargs grep -l "batch.*route\|CreateGenAIBatch" 2>/dev/null

# Look at the complete RouteConfig and route registration mechanism
rg -n "type RouteConfig struct" transports/bifrost-http --type go -A 30

# See more of RegisterRoutes to find where router.HandleFunc is called
rg -n "RegisterRoutes" transports/bifrost-http/integrations/router.go -A 100 | head -120

# Look for HandleFunc or router method calls
rg -n "r\.Handle|r\.POST|r\.GET" transports/bifrost-http/integrations/router.go

# Check how the fasthttp router library works - look at imports
rg -n "import" transports/bifrost-http/integrations/router.go -A 20 | head -40

# Search for any files related to genai batch routing
find transports/bifrost-http -name "*.go" -type f | xargs grep -l "batch.*generate.*content\|batchGenerateContent" 2>/dev/null

# Look at the batch create route definition to confirm the exact path
sed -n '438,500p' transports/bifrost-http/integrations/genai.go | grep -A 2 "Path.*:"

# Check for tests that verify batch endpoint routing
find transports/bifrost-http -name "*.go" -type f | xargs grep -l "GenAIRouter\|batchGenerateContent" 2>/dev/null | head -5

# Look for fasthttp/router parameter/matching usage in the codebase
rg -n "router\." transports/bifrost-http --type go | grep -i "parameter\|match\|pattern" | head -10

# Check the full path construction in batch routes
rg -n "Path:.*pathPrefix" transports/bifrost-http/integrations/genai.go | grep -A 1 "batch"

# Check for any route reordering or priority logic in RegisterRoutes
rg -n "RegisterRoutes" transports/bifrost-http/integrations/router.go -A 150 | grep -E "sort|order|priority|precedence|route" | head -20

# Look at the full RegisterRoutes implementation
sed -n '326,380p' transports/bifrost-http/integrations/router.go

# Check the integrations handler wiring
cat transports/bifrost-http/handlers/integrations.go | head -50

# Search for tests that exercise batch handling
find transports/bifrost-http -name "*test*.go" -type f | xargs grep -l "batch\|Batch" 2>/dev/null

# Check if this is part of a PR stack or a standalone change
git log --oneline -20 2>/dev/null | head -20
```
Route registration order causes batch requests to match generic catch-all route
The batch endpoint routes will be incorrectly handled by the generic inference route. The generic route at /v1beta/models/{model:*} is registered before the batch-specific route at /v1beta/models/{model}:batchGenerateContent. Since fasthttp/router uses first-match-wins semantics, requests like POST /genai/v1beta/models/gemini-1.5-pro:batchGenerateContent will match the generic catch-all pattern (with model="gemini-1.5-pro:batchGenerateContent") and be routed to the generic handler instead of the batch handler.
Fix: Reorder route registration in NewGenAIRouter (genai.go:1249-1251) so batch routes are appended before generic routes:
```go
routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
routes = append(routes, CreateGenAIRouteConfigs("/genai")...)
```
routes = append(routes, CreateGenAIRouteConfigs("/genai")...)🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go (around lines 438 to 661, and specifically NewGenAIRouter at genai.go:1249-1251), the batch-specific routes are registered after the generic catch-all model route, causing POST /v1beta/models/{model}:batchGenerateContent to be captured by the generic route. Fix by changing the route registration order in NewGenAIRouter so CreateGenAIBatchRouteConfigs("/genai", handlerStore) is appended before the generic CreateGenAIRouteConfigs("/genai") (i.e., append batch routes first, then file routes, then generic routes), ensuring first-match-wins routing reaches the batch endpoints correctly.
Actionable comments posted: 0
♻️ Duplicate comments (12)
core/providers/gemini/files.go (1)

466-485: Fix misleading comment and optionally guard nil input

The comment still says "Uses snake_case field names to match Google's API format", but GeminiFileResponse uses camelCase JSON tags and this helper mainly wraps the file in a "file" envelope. Consider updating it to something like "Wraps the Gemini file response in a file object to match the Gemini Files upload response shape."

You may also want to treat a nil *BifrostFileUploadResponse defensively (return nil early) to avoid accidental panics if a converter is ever called on a nil response.

transports/bifrost-http/integrations/router.go (3)

423-428: Call cancel() before returning on ErrResumableUploadInit

When PreCallback returns ErrResumableUploadInit, you return without ever calling the cancel function from ConvertToBifrostContext, leaving that cancellable context alive longer than necessary. Even though no upstream Bifrost call is made on this path, it is more correct, and consistent with the other branches, to cancel before returning. Consider:

```diff
 if config.PreCallback != nil {
 	if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
 		// Check if this is a resumable upload init that was already handled
 		if err == ErrResumableUploadInit {
-			// Response was already written by the PreCallback, just return
-			return
+			// Response was already written by the PreCallback; clean up context and return
+			cancel()
+			return
 		}
 		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
 		return
 	}
 }
```

870-895: Remove fmt.Printf debug logging from the file upload path

These debug prints (fmt.Printf) bypass the structured logger and will spam stdout in production: before FileUploadRequest, on FileUploadRequest error/success, around PostCallback, and around FileUploadResponseConverter. Either drop them or switch to g.logger.Debug(...) with the same context. For example:

```diff
-fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename)
+g.logger.Debug("router: FileUploadRequest provider=%s purpose=%s filename=%s", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename)
```

1011-1013: Remove the leftover debug print on file response conversion error

The fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err) call should also be removed or switched to structured logging (g.logger.Debug(...)) to avoid ad-hoc stdout logging in production:

```diff
 if err != nil {
-	fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err)
 	g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response"))
 	return
 }
```

transports/bifrost-http/integrations/genai.go (7)

52-67: The cleanup goroutine runs forever; consider cancellable shutdown

The init-spawned cleanup goroutine loops over a ticker indefinitely and never stops or calls ticker.Stop(). This is fine for simple daemons but complicates graceful shutdown and test teardown. A cancellable stop channel or context (plus defer ticker.Stop()) would give the host application a way to terminate this goroutine cleanly when shutting down the server.

780-934: Multipart/related parsing should avoid string round-trips and TrimSpace on binary content

parseGeminiMultipartRelated currently converts the whole body to a string, uses strings.Split, TrimSpace, and substring indexes to extract headers and content, then converts back to []byte for the file. While Go's []byte/string conversions preserve bytes, this approach:
- copies the entire body, which is costly for large uploads;
- applies TrimSpace over the whole part, which can strip leading/trailing bytes that count as whitespace from binary payloads;
- reimplements multipart parsing logic that the standard library already handles robustly.

It would be safer and more efficient to either use mime/multipart to parse the multipart/related body, or operate directly on []byte with bytes.Split/bytes.Index, avoiding the full-body string allocation and TrimSpace on binary segments. Refactoring this will make large file uploads more robust and memory-friendly. (A standard-library sketch appears after this comment list.)

792-861: Remove or downgrade the debug fmt.Printf/full-response logging in upload paths

There are many [DEBUG] fmt.Printf calls and a sonic.Marshal of the full response in parseGeminiFileUploadRequest, parseGeminiResumableUpload, parseGeminiResumableUploadPhase2, setResumableUploadFinalStatus, extractGeminiResumableUploadParams, and extractGeminiFileUploadParams. These dump paths, headers, metadata, full bodies, and entire responses. In production this bloats logs, can impact performance (especially marshaling the whole response), and risks leaking sensitive request/response content into logs. If you need this for troubleshooting, route it through the existing structured logger at debug level and avoid logging entire bodies; otherwise, remove these debug prints before merge.
Also applies to: 824-855, 1013-1063, 1066-1077, 1082-1109, 1118-1129

1011-1063: Avoid hardcoding Purpose: "batch" for all resumable uploads

parseGeminiResumableUploadPhase2 unconditionally sets uploadReq.Purpose = "batch". This is correct for batch-oriented uploads, but if this endpoint is ever reused for other Gemini file purposes (general context files, images, etc.), all such uploads will be mislabeled. Better: read the intended purpose from phase-1 metadata (or an HTTP header) and store it in the uploadSession; in phase 2, populate uploadReq.Purpose from the session, with a fallback default (e.g., "batch") only if nothing was provided; optionally validate the purpose against allowed values before propagating. That keeps the handler forward-compatible with non-batch upload flows.

45-49: Handle crypto/rand.Read errors when generating session IDs

generateSessionID ignores the error from rand.Read, which is undesirable for identifiers that may be relied on for unpredictability. If rand.Read ever fails, you currently hex-encode zeroed bytes. Consider:

```go
func generateSessionID() string {
	bytes := make([]byte, 16)
	if _, err := rand.Read(bytes); err != nil {
		// Fail fast or fall back to a clearly non-cryptographic ID
		// (and ideally log this as a critical error).
		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), len(bytes))
	}
	return hex.EncodeToString(bytes)
}
```

and adjust call sites if you choose to propagate an error instead of silently falling back.

463-482: Validate the Src type and elements to avoid silent misconfiguration

In the batch-create converter, the type switch on sdkReq.Src only handles string and []interface{} and silently ignores any other type; non-map items within the array are also skipped without error. This can lead to confusing downstream "either input_file_id or requests must be provided" errors instead of a clear SDK-level validation message. Consider adding a default case that returns a 400-style error such as "src must be a string (file reference) or array of request objects", and, within the []interface{} case, returning an error if any element is not a map[string]interface{} instead of silently skipping it. Example:

```go
switch src := sdkReq.Src.(type) {
case string:
	bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
case []interface{}:
	requests := make([]schemas.BatchRequestItem, 0, len(src))
	for i, item := range src {
		itemMap, ok := item.(map[string]interface{})
		if !ok {
			return nil, fmt.Errorf("src[%d] must be an object, got %T", i, item)
		}
		customID := fmt.Sprintf("request-%d", i)
		requests = append(requests, schemas.BatchRequestItem{
			CustomID: customID,
			Body:     itemMap,
		})
	}
	bifrostReq.Requests = requests
default:
	return nil, fmt.Errorf("invalid src type: expected string or array, got %T", src)
}
```

This surfaces malformed SDK requests immediately at the HTTP layer.

1082-1112: Respect X-Forwarded-Proto/X-Forwarded-Host when building resumable upload URLs

In extractGeminiFileUploadParams, the phase-1 resumable init builds the phase-2 URL from ctx.IsTLS() and ctx.Host(). When Bifrost runs behind a reverse proxy, those reflect the internal hop, not the client-facing scheme/host, so clients are handed an internal URL they cannot reach. Follow the pattern used elsewhere (e.g., session handling) and honor reverse-proxy headers: prefer X-Forwarded-Proto for the scheme (https if it contains https), prefer X-Forwarded-Host for the host if non-empty, and fall back to ctx.IsTLS()/ctx.Host() only when those headers are absent. Sketch:

```go
scheme := "http"
if xfProto := string(ctx.Request.Header.Peek("X-Forwarded-Proto")); strings.Contains(strings.ToLower(xfProto), "https") {
	scheme = "https"
} else if ctx.IsTLS() {
	scheme = "https"
}

host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
if host == "" {
	host = string(ctx.Host())
}

uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)
```

This makes resumable uploads work correctly when fronted by e.g. nginx or a load balancer.
Also applies to: 1138-1163

core/providers/gemini/batch.go (1)

899-919: Fix the batch stats mapping to avoid negative or misleading counts

In ToGeminiBatchRetrieveResponse and ToGeminiBatchListResponse, BatchStats is derived as RequestCount = resp.RequestCounts.Total, PendingRequestCount = Total - Completed, and SuccessfulRequestCount = Completed - Failed. However: BatchRetrieve populates only Completed and Failed (leaving Total == 0), so PendingRequestCount becomes negative once any work has completed; for list responses, RequestCounts is left at zero, so all stats stay zero even for completed jobs; and Completed already represents successful requests, so subtracting Failed again makes SuccessfulRequestCount wrong. Derive the stats defensively per response: treat success := resp.RequestCounts.Completed; compute total := resp.RequestCounts.Total and, if total <= 0, set total = success + resp.RequestCounts.Failed; derive pending := total - (success + resp.RequestCounts.Failed) and clamp it to >= 0. Then set:

```go
BatchStats: &GeminiBatchStats{
	RequestCount:           total,
	PendingRequestCount:    pending,
	SuccessfulRequestCount: success,
},
```

and mirror this pattern in both ToGeminiBatchRetrieveResponse and ToGeminiBatchListResponse so the SDK invariants (RequestCount >= 0, PendingRequestCount >= 0, RequestCount roughly equals success + failed + pending) always hold. The create-path converter can keep its current logic since it has a fully populated Total.
Also applies to: 941-958
🧹 Nitpick comments (1)
core/bifrost.go (1)
1133-1209: BatchDeleteRequest mirrors the other batch flows and looks correct

The validation, provider/config resolution, base provider handling, key selection, and executeRequestWithRetries usage all match the patterns used by BatchRetrieveRequest, BatchCancelRequest, and BatchResultsRequest. Error ExtraFields population for BatchDeleteRequest is also consistent. No issues from a core-flow perspective. If you touch this area again, consider a small internal helper to DRY up the repeated "batch request" setup (provider/config/baseProvider/key selection) across the Batch* methods.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (1 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (11)
- core/providers/bedrock/batch.go
- core/schemas/bifrost.go
- core/providers/parasail/batch.go
- core/providers/perplexity/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/anthropic/batch.go
- core/schemas/batch.go
- core/providers/sgl/batch.go
- core/providers/cohere/batch.go
- core/schemas/provider.go
- core/providers/openrouter/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/mistral/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/azure/batch.go
- core/providers/vertex/batch.go
- core/providers/ollama/batch.go
- core/bifrost.go
- core/providers/gemini/files.go
- core/providers/groq/batch.go
- core/providers/gemini/batch.go
- core/providers/cerebras/batch.go
- transports/bifrost-http/integrations/genai.go
- core/providers/gemini/types.go
- transports/bifrost-http/integrations/router.go
🧬 Code graph analysis (8)

core/providers/elevenlabs/batch.go (3)
- core/providers/elevenlabs/elevenlabs.go (1): ElevenlabsProvider (23-29)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)

core/providers/vertex/batch.go (4)
- core/providers/vertex/vertex.go (1): VertexProvider (57-62)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/providers/ollama/batch.go (4)
- core/providers/ollama/ollama.go (1): OllamaProvider (18-23)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

core/bifrost.go (3)
- core/schemas/bifrost.go (5): BatchDeleteRequest (103-103), BifrostError (364-373), ErrorField (382-389), BifrostErrorExtraFields (431-435), RequestType (83-83)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/provider.go (1): Provider (313-362)

core/providers/gemini/files.go (1)
- core/schemas/files.go (1): BifrostFileUploadResponse (89-105)

core/providers/cerebras/batch.go (4)
- core/providers/cerebras/cerebras.go (1): CerebrasProvider (16-21)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2): BifrostError (364-373), BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1): NewUnsupportedOperationError (391-402)

transports/bifrost-http/integrations/genai.go (4)
- core/schemas/provider.go (1): Provider (313-362)
- core/providers/gemini/types.go (2): GeminiFileUploadRequest (1656-1662), Type (779-779)
- core/schemas/files.go (5): BifrostFileUploadRequest (53-66), BifrostFileUploadResponse (89-105), BifrostFileListRequest (108-124), BifrostFileRetrieveRequest (137-146), BifrostFileDeleteRequest (168-177)
- core/schemas/batch.go (11): BifrostBatchCreateRequest (65-82), BatchRequestItem (31-37), BifrostBatchCreateResponse (85-109), BifrostBatchListRequest (112-126), BifrostBatchListResponse (129-140), BifrostBatchRetrieveRequest (143-149), BifrostBatchRetrieveResponse (152-187), BifrostBatchCancelRequest (190-196), BifrostBatchCancelResponse (199-208), BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)

transports/bifrost-http/integrations/router.go (3)
- core/schemas/batch.go (2): BifrostBatchDeleteRequest (266-272), BifrostBatchDeleteResponse (275-281)
- transports/bifrost-http/integrations/genai.go (1): ErrResumableUploadInit (36-36)
- core/schemas/provider.go (1): Provider (313-362)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (15)
core/providers/cerebras/batch.go (1)

35-38: The BatchDelete stub correctly mirrors the other unsupported Cerebras batch ops. The new BatchDelete method cleanly follows the existing pattern of returning NewUnsupportedOperationError with the right RequestType and provider key; signature and behavior are consistent with the other Cerebras batch methods and with the core BatchDelete schema.

core/providers/elevenlabs/batch.go (1)

35-38: LGTM! The implementation correctly follows the established pattern for unsupported batch operations in the ElevenLabs provider. The method signature, error handling, and comment are all consistent with the other batch method stubs in this file.

core/providers/vertex/batch.go (1)

35-38: The Vertex BatchDelete stub correctly exposes an unsupported operation. The method signature and use of NewUnsupportedOperationError are consistent with the other Vertex batch methods and the provider interface; no issues here.

core/providers/groq/batch.go (1)

35-38: The Groq BatchDelete stub is consistent with the existing batch surface. The stub correctly matches the Provider interface and mirrors the existing unsupported Batch* methods; behavior is clear and predictable.

core/providers/ollama/batch.go (1)

35-38: The Ollama BatchDelete stub matches the other unsupported batch methods. Signature and unsupported-error construction are correct and aligned with the rest of the provider.

core/providers/mistral/batch.go (1)

35-38: The Mistral BatchDelete stub is wired correctly. It implements the BatchDelete method with the expected signature and unsupported error, consistent with the other batch operations.

core/providers/azure/batch.go (1)

438-441: The Azure BatchDelete stub correctly advertises unsupported behavior. The method signature matches the Provider interface and returns NewUnsupportedOperationError with BatchDeleteRequest, consistent with the other unsupported operations.

transports/bifrost-http/integrations/router.go (4)

81-89: BatchRequest is extended cleanly for delete support. Adding DeleteRequest *schemas.BifrostBatchDeleteRequest keeps the struct symmetric with the other batch operations and is compatible with the existing Type switch in handleBatchRequest.

159-162: The BatchDeleteResponseConverter type aligns with the existing converter pattern. The new converter type matches the style of the other Batch* converters and keeps integration-specific formatting concerns encapsulated.

277-291: The RouteConfig BatchDeleteResponseConverter hook is consistent. Extending RouteConfig with BatchDeleteResponseConverter follows the same pattern as the other Batch* converters and gives integrations explicit control over delete responses.

822-843: BatchDelete handling matches the other batch operations. The BatchDeleteRequest branch validates DeleteRequest, calls BatchDeleteRequest on the client, runs PostCallback, and then applies BatchDeleteResponseConverter when present, falling back to the raw response otherwise. This is consistent with the patterns used for create/list/retrieve/cancel/results.

core/providers/gemini/batch.go (2)

847-868: The job state mapping looks consistent with the SDK enums. The ToGeminiJobState mapping from internal BatchStatus to SDK job states is coherent (including pending/finalizing to RUNNING and expired to FAILED) and should interoperate cleanly with GenAI SDK expectations.

986-1043: The BatchDelete implementation is straightforward and aligns with HTTP semantics. The Gemini BatchDelete provider validates batch_id and operation allowance, normalizes IDs to "batches/...", accepts both 200 OK and 204 No Content as success, and returns a simple BifrostBatchDeleteResponse with latency metadata. This is a reasonable first cut; any future change to treat 404 as an idempotent success (instead of an error) can be handled as a higher-level API decision.

transports/bifrost-http/integrations/genai.go (1)

438-661: Batch route wiring and router integration look coherent. The new CreateGenAIBatchRouteConfigs wiring for create/list/retrieve/cancel/delete correctly maps SDK request structs to Bifrost BifrostBatch* requests, uses PreCallbacks to inject provider, model, and normalized batch IDs, honors raw ExtraFields.RawResponse passthrough when present, and is integrated into NewGenAIRouter alongside the existing chat and file routes. From a routing and plumbing perspective, the shapes and paths (/v1beta/models/{model}:batchGenerateContent, /v1beta/batches/...) look consistent with the rest of the GenAI surface.
Also applies to: 1248-1255

core/providers/gemini/types.go (1)

1578-1651: The SDK batch/job types and updated file upload request look consistent. The additions (GeminiJobState* constants for SDK job states; the SDK-facing batch types GeminiBatchCreateRequestSDK, GeminiBatchInlineRequest, GeminiBatchJobResponseSDK, and the list/retrieve/cancel/delete request/response structs; and the updated GeminiFileUploadRequest with File, Filename, MimeType, Purpose, and ResumableSessionID) are all structurally sound and align with the GenAI batch and files APIs. JSON field names match the documented snake_case/camelCase expectations, and the file-upload struct now fits the new resumable-upload flow in genai.go without leaking core schemas into this package. (An illustrative construction follows.)
Also applies to: 1655-1662
Force-pushed from d69d5e9 to 08f19cc (Compare)
Force-pushed from 2ea6dcc to 84119ca (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (13)
core/providers/gemini/files.go (1)
465-485: Gemini upload wrapper implementation is correct; update misleading snake_case commentThe new
GeminiFileUploadResponseWrapperandToGeminiFileUploadResponsecorrectly produce a{ "file": { ... } }response using the existingGeminiFileResponseshape. However, the// Uses snake_case field names to match Google's API format.comment is now inaccurate since the JSON tags onGeminiFileResponseare camelCase.Consider updating the comment to describe the top‑level
"file"envelope and camelCase field naming instead, e.g.:-// Uses snake_case field names to match Google's API format. -// GeminiFileUploadResponseWrapper is a wrapper that contains the file response for the upload API. +// GeminiFileUploadResponseWrapper wraps the Gemini file object under a top-level "file" key +// using Gemini's camelCase JSON field names as expected by the Gemini Files API.transports/bifrost-http/integrations/router.go (2)
870-895: Removefmt.Printfdebug logging from file handling and use structured logger if neededThe file upload path still uses
fmt.Printffor ad‑hoc debug logging, and there’s a similar print in the file response error path:
- Lines 870, 877, 881, 883, 885, 889, 892, 894
- Line 1012
These send debug output to stdout and bypass the router’s structured logger.
Recommend removing them (or replacing with
g.logger.Debug(...)if you really need logs):- fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename) @@ - errMsg := "unknown error" - if bifrostErr.Error != nil { - errMsg = bifrostErr.Error.Message - } - fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider) @@ - fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID) @@ - fmt.Printf("[DEBUG] router: calling PostCallback\n") @@ - fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err) @@ - fmt.Printf("[DEBUG] router: PostCallback success\n") @@ - fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n") @@ - fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err) @@ - if err != nil { - fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err) + if err != nil { g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response"))Also applies to: 1012-1013
423-431: Ensurecancel()is called on PreCallback errors (including resumable upload init)When
config.PreCallbackreturns an error, the handler returns without ever callingcancel()on the Bifrost context. This includes theErrResumableUploadInitpath where the PreCallback has already written the response.To avoid leaking the cancellable context (and to keep semantics consistent across all early returns), call
cancel()before returning:if config.PreCallback != nil { - if err := config.PreCallback(ctx, bifrostCtx, req); err != nil { - // Check if this is a resumable upload init that was already handled - if err == ErrResumableUploadInit { - // Response was already written by the PreCallback, just return - return - } - g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error())) - return - } + if err := config.PreCallback(ctx, bifrostCtx, req); err != nil { + // Always cancel the bifrost context on pre-callback errors + cancel() + // Check if this is a resumable upload init that was already handled + if err == ErrResumableUploadInit { + // Response was already written by the PreCallback; just return + return + } + g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error())) + return + } }transports/bifrost-http/integrations/genai.go (8)
1066-1079: setResumableUploadFinalStatus header behavior is fine; drop heavy debug loggingThe helper correctly sets
X-Goog-Upload-Status: finalon phase‑2 responses. The additional logging:
- Marshals the entire response to JSON.
- Prints response body and status via
fmt.Printf.This is overkill for production and can be noisy and expensive.
Consider trimming to just setting the header (and, if needed, a lightweight structured debug log elsewhere), e.g.:
func setResumableUploadFinalStatus(ctx *fasthttp.RequestCtx, req interface{}, resp interface{}) error { // Set the upload status to final to signal completion of resumable upload ctx.Response.Header.Set("X-Goog-Upload-Status", "final") - - // Log the response for debugging - respJSON, _ := sonic.Marshal(resp) - fmt.Printf("[DEBUG] setResumableUploadFinalStatus: set X-Goog-Upload-Status=final, response body=%s\n", string(respJSON)) - - // Also log the full response headers for debugging - fmt.Printf("[DEBUG] setResumableUploadFinalStatus: status code=%d\n", ctx.Response.StatusCode()) - return nil }
1081-1113: Avoid mutatinguploadSessionafter storing insync.Mapand remove debug
extractGeminiResumableUploadParamscorrectly:
- Reads provider and filename from the session.
- Deletes the session entry when done.
- Stores provider and original filename in the Bifrost context.
Two follow‑ups:
- Past feedback noted the risk of mutating
uploadSessionwhile it’s stored insync.Map. Most writes have been removed from here, butextractGeminiFileUploadParamsstill mutatessession.Provider(see below).- The
[DEBUG]print here should be removed.You can leave this function mostly as‑is and just drop the debug log:
- fmt.Printf("[DEBUG] extractGeminiResumableUploadParams: sessionID=%s, provider=%s, filename=%s\n", sessionIDStr, provider, originalFilename) *bifrostCtx = context.WithValue(*bifrostCtx, bifrostContextKeyProvider, provider)The mutation fix is handled in the next comment on
extractGeminiFileUploadParams.
1247-1255: Reorder GenAI routes so batch endpoints are not shadowed by the generic model routeCurrent route registration order is:
routes := CreateGenAIRouteConfigs("/genai") routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...) routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)Because fasthttp/router uses first‑match‑wins semantics and the generic route is
/genai/v1beta/models/{model:*}, requests like:
POST /genai/v1beta/models/gemini-1.5-pro:batchGenerateContentwill match the generic model route (with
model="gemini-1.5-pro:batchGenerateContent") instead of your new batch route/genai/v1beta/models/{model}:batchGenerateContent.Batch endpoints will therefore never be hit.
Fix by registering batch routes before the generic catch‑all:
func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter { - routes := CreateGenAIRouteConfigs("/genai") - routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...) - routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...) + // Order matters: batch routes must come before the generic model route + routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore) + routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...) + routes = append(routes, CreateGenAIRouteConfigs("/genai")...)
23-37: Tighten session ID generation and consider cleanup shutdown hookTwo points in the new resumable upload session infra:
generateSessionIDignores errors fromcrypto/rand.Read, which is a security footgun if the RNG fails; IDs could become predictable.- The cleanup goroutine runs forever with no way to stop it on process shutdown (less critical, but worth noting for graceful exits).
You can keep a simple API while handling the RNG error:
func generateSessionID() string { bytes := make([]byte, 16) - rand.Read(bytes) - return hex.EncodeToString(bytes) + if _, err := rand.Read(bytes); err != nil { + // Extremely unlikely; fall back to a time-based ID instead of silently weakening randomness + now := time.Now().UnixNano() + return fmt.Sprintf("%d-%d", now, now) + } + return hex.EncodeToString(bytes) }Optionally, expose a stop mechanism for the cleanup goroutine (e.g., a
cleanupStopchannel and aShutdownGenAIUploads()hook) so the main process can terminate it cleanly when shutting down.Also applies to: 45-49, 52-67
438-661: Improve Gemini batch route handling: addSrcdefault case and keep configs as-isThe new batch route configs (
CreateGenAIBatchRouteConfigsand helpers) are well‑structured and map Gemini SDK types to Bifrost batch requests cleanly. One gap is theSrctype switch in the batch create converter:switch src := sdkReq.Src.(type) { case string: // file-based case []interface{}: // inline }If
Srcis neitherstringnor[]interface{}(e.g., malformed input or SDK change), the converter silently leaves bothInputFileIDandRequestsunset, leading to confusing downstream behavior.Consider failing fast with a descriptive error:
@@ func CreateGenAIBatchRouteConfigs(pathPrefix string, handlerStore lib.HandlerStore) []RouteConfig { - switch src := sdkReq.Src.(type) { + switch src := sdkReq.Src.(type) { case string: @@ case []interface{}: @@ bifrostReq.Requests = requests + default: + return nil, fmt.Errorf("invalid src type: expected string or array, got %T", src) }Everything else in these batch route configs (provider extraction, path parameters, converters) looks good.
Also applies to: 663-778
780-806: Fix multipart/related parsing to avoid binary corruption and drop debug prints
parseGeminiMultipartRelatedcurrently does:
strings.Split(string(body), delimiter)on the entire request body.- Interprets each part as a string and later converts the file part back to
[]byte(content).Converting arbitrary binary data to
stringand back can corrupt content (UTF‑8 expectations, embedded NULs, etc.). Also,parseGeminiFileUploadRequest/parseGeminiResumableUploademit a lot offmt.Printfdebug logs.Recommend:
- Operate on
[]bytethroughout for multipart/related:+import ( + "bytes" + // ... +) @@ func parseGeminiMultipartRelated(ctx *fasthttp.RequestCtx, uploadReq *gemini.GeminiFileUploadRequest, body []byte, contentType string) error { - // Split body by boundary - delimiter := "--" + boundary - parts := strings.Split(string(body), delimiter) - - for _, part := range parts { - part = strings.TrimSpace(part) - if part == "" || part == "--" { + // Split body by boundary using bytes to preserve binary data + delimiter := []byte("--" + boundary) + parts := bytes.Split(body, delimiter) + + for _, part := range parts { + part = bytes.TrimSpace(part) + if len(part) == 0 || bytes.Equal(part, []byte("--")) { continue } - // Split headers from content - headerEnd := strings.Index(part, "\r\n\r\n") - if headerEnd == -1 { - headerEnd = strings.Index(part, "\n\n") + // Split headers from content + sep := []byte("\r\n\r\n") + headerEnd := bytes.Index(part, sep) + if headerEnd == -1 { + sep = []byte("\n\n") + headerEnd = bytes.Index(part, sep) if headerEnd == -1 { continue } } - headers := part[:headerEnd] - content := part[headerEnd:] - content = strings.TrimPrefix(content, "\r\n\r\n") - content = strings.TrimPrefix(content, "\n\n") + headers := string(part[:headerEnd]) + content := part[headerEnd+len(sep):] @@ - if strings.Contains(headersLower, "application/json") { + if strings.Contains(headersLower, "application/json") { @@ - if err := sonic.Unmarshal([]byte(content), &metadata); err == nil { + if err := sonic.Unmarshal(content, &metadata); err == nil { @@ - } else { - // This is file content - uploadReq.File = []byte(content) + } else { + // This is file content; copy to avoid holding the whole body slice + uploadReq.File = append([]byte(nil), content...) }
- Remove the
[DEBUG]fmt.Printfcalls inparseGeminiFileUploadRequestandparseGeminiResumableUpload(or replace with structured logging if absolutely needed), to avoid noisy stdout in production.Also applies to: 808-865, 867-934
1011-1064: Resumable phase‑2 handler is correct, but purpose is hardcoded to"batch"
parseGeminiResumableUploadPhase2correctly:
- Looks up the session by
{session_id}.- Copies the raw body into
uploadReq.File.- Rehydrates filename and MIME type from the stored session.
Two minor considerations:
uploadReq.Purposeis hardcoded to"batch". If this resumable path is ever reused for non‑batch uploads, consider deriving purpose from phase‑1 metadata or an extra header instead of forcing"batch".- Debug
fmt.Printfcalls here should be removed or replaced with structured logging.For example:
- uploadReq.Purpose = "batch" // Default purpose for file uploads via GenAI API + // Default purpose for GenAI batch uploads. If you later support other + // purposes via resumable uploads, plumb them through phase-1 metadata. + uploadReq.Purpose = "batch" @@ -fmt.Printf("[DEBUG] parseGeminiResumableUploadPhase2: called, path=%s\n", string(ctx.Path())) -... -fmt.Printf("[DEBUG] parseGeminiResumableUploadPhase2: sessionID from path=%v\n", sessionID) -... -fmt.Printf("[DEBUG] parseGeminiResumableUploadPhase2: session found=%v\n", ok) -... -fmt.Printf("[DEBUG] parseGeminiResumableUploadPhase2: bodyLen=%d, filename=%s, provider=%s\n", len(body), session.Filename, session.Provider) -... -fmt.Printf("[DEBUG] parseGeminiResumableUploadPhase2: successfully prepared upload request\n")
1115-1163: Fix sync.Map mutation in resumable init and support reverse‑proxy headers for upload URL

In `extractGeminiFileUploadParams`:

- You (correctly) detect resumable‑init requests via `uploadReq.ResumableSessionID` and write back the phase‑2 URL.
- But you mutate `session.Provider` in‑place on a `*uploadSession` stored in a `sync.Map`, which can race with the cleanup goroutine reading the same struct.
- Upload URL construction uses `ctx.Host()` and `ctx.IsTLS()` only, ignoring `X-Forwarded-Proto`/`X-Forwarded-Host`, which breaks behind reverse proxies/ingress.

Suggested improvements:

- Treat `uploadSession` as immutable; copy‑on‑write when setting `Provider`:

```diff
-	if sessionVal, ok := uploadSessions.Load(uploadReq.ResumableSessionID); ok {
-		if session, ok := sessionVal.(*uploadSession); ok {
-			session.Provider = schemas.ModelProvider(provider)
-		}
-	}
+	if sessionVal, ok := uploadSessions.Load(uploadReq.ResumableSessionID); ok {
+		if session, ok := sessionVal.(*uploadSession); ok {
+			updated := *session
+			updated.Provider = schemas.ModelProvider(provider)
+			uploadSessions.Store(uploadReq.ResumableSessionID, &updated)
+		}
+	}
```

- Honor reverse‑proxy headers when building the upload URL:

```diff
-	// Build the upload URL for phase 2
-	// Use the request's host and scheme to build the URL
-	scheme := "http"
-	if ctx.IsTLS() {
-		scheme = "https"
-	}
-	host := string(ctx.Host())
+	// Build the upload URL for phase 2.
+	// Prefer X-Forwarded-Proto/Host so it works correctly behind proxies.
+	scheme := "http"
+	if xfProto := strings.ToLower(string(ctx.Request.Header.Peek("X-Forwarded-Proto"))); xfProto != "" {
+		if strings.Contains(xfProto, "https") {
+			scheme = "https"
+		}
+	} else if ctx.IsTLS() {
+		scheme = "https"
+	}
+
+	host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
+	if host == "" {
+		host = string(ctx.Host())
+	}
 	uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)
 @@
-	fmt.Printf("[DEBUG] extractGeminiFileUploadParams: provider=%s\n", provider)
 @@
-	fmt.Printf("[DEBUG] extractGeminiFileUploadParams: resumableSessionID=%s, fileLen=%d\n", uploadReq.ResumableSessionID, len(uploadReq.File))
 @@
-	fmt.Printf("[DEBUG] extractGeminiFileUploadParams: returning upload URL=%s\n", uploadURL)
```

This removes the data race, makes resumable URLs reverse‑proxy aware, and drops stdout debug noise.
core/providers/gemini/batch.go (1)
847-896: Fix Gemini BatchStats math to avoid negative or inconsistent counts

Current stats computation can still produce invalid values:

- `RequestCount` uses `resp.RequestCounts.Total` directly, which is often left at its zero value in upstream code (e.g., `BatchRetrieve` only sets `Completed` and `Failed`).
- `PendingRequestCount = Total - Completed` and `SuccessfulRequestCount = Completed - Failed` can become negative when `Total == 0` but `Completed`/`Failed > 0`, or when `Completed < Failed`.

This breaks invariants like “counts are non‑negative” and “total ≈ success + failures + pending”.

Consider deriving stats defensively in all three converters (`ToGeminiBatchJobResponse`, `ToGeminiBatchRetrieveResponse`, `ToGeminiBatchListResponse`), e.g.:

```diff
 func ToGeminiBatchJobResponse(resp *schemas.BifrostBatchCreateResponse) *GeminiBatchJobResponseSDK {
 @@
-	if resp.CreatedAt > 0 {
-		result.Metadata = &GeminiBatchMetadata{
-			Name:       resp.ID,
-			State:      ToGeminiJobState(resp.Status),
-			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-			BatchStats: &GeminiBatchStats{
-				RequestCount:           resp.RequestCounts.Total,
-				PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-				SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-			},
-		}
-	}
+	if resp.CreatedAt > 0 {
+		total := resp.RequestCounts.Total
+		if total == 0 {
+			total = resp.RequestCounts.Completed + resp.RequestCounts.Failed + resp.RequestCounts.Pending
+		}
+
+		success := resp.RequestCounts.Completed
+		if success < 0 {
+			success = 0
+		}
+
+		pending := total - (success + resp.RequestCounts.Failed)
+		if pending < 0 {
+			pending = 0
+		}
+
+		result.Metadata = &GeminiBatchMetadata{
+			Name:       resp.ID,
+			State:      ToGeminiJobState(resp.Status),
+			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
+			BatchStats: &GeminiBatchStats{
+				RequestCount:           total,
+				PendingRequestCount:    pending,
+				SuccessfulRequestCount: success,
+			},
+		}
+	}
```

and mirror the same `total`/`success`/`pending` derivation inside `ToGeminiBatchRetrieveResponse` and `ToGeminiBatchListResponse` instead of directly using `resp.RequestCounts.Total` and simple differences. This guarantees non‑negative counts and consistent totals. A shared helper along these lines is sketched after this comment.

Also applies to: 899-933, 936-972
1591-1651:Src interface{}relies on downstream validation (already flagged earlier).
GeminiBatchCreateRequestSDK.Srcbeinginterface{}is fine as a thin SDK-shape mirror, but it does mean all safety/correctness depends on the HTTP/translation layer doing strict type checks (string vs array of inline requests) and returning clear 400s for anything else. That concern has already been raised on the transport layer; types here look consistent with that design.
🧹 Nitpick comments (1)
core/bifrost.go (1)
1249-1325: BatchDeleteRequest wiring correctly mirrors existing Batch flows (optional dedup opportunity)

The new `BatchDeleteRequest` method follows the same pattern as `BatchCancelRequest`/`BatchResultsRequest`: input validation, provider/config lookup, base-provider resolution, conditional key selection, and `executeRequestWithRetries` around `provider.BatchDelete`. Error `ExtraFields` are set with `RequestType: BatchDeleteRequest` and `Provider`, which is consistent with the rest of the API.

The only notable point is that this introduces one more copy of the common “provider/config/baseProvider/key selection” boilerplate already present in the other Batch* methods. If this area sees more changes later, consider extracting a small internal helper (sketched below) to reduce repetition and keep the batch operations easier to evolve.
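One possible shape for that helper; the function name and exact signature are invented, and the body is deliberately left as a stub:

```go
// resolveBatchTarget would centralize the provider/config/baseProvider/key
// selection currently repeated by every Batch* method. Each method would
// then only wrap its own provider.BatchX call in executeRequestWithRetries.
func (bifrost *Bifrost) resolveBatchTarget(
	providerName schemas.ModelProvider,
) (schemas.Provider, *schemas.Key, *schemas.BifrostError) {
	// 1. validate providerName
	// 2. look up provider instance and config
	// 3. resolve the base provider
	// 4. conditionally select a key
	panic("sketch only")
}
```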
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
- core/bifrost.go (1 hunks)
- core/providers/anthropic/batch.go (1 hunks)
- core/providers/azure/batch.go (1 hunks)
- core/providers/bedrock/batch.go (1 hunks)
- core/providers/cerebras/batch.go (1 hunks)
- core/providers/cohere/batch.go (1 hunks)
- core/providers/elevenlabs/batch.go (1 hunks)
- core/providers/gemini/batch.go (1 hunks)
- core/providers/gemini/files.go (1 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/groq/batch.go (1 hunks)
- core/providers/mistral/batch.go (1 hunks)
- core/providers/ollama/batch.go (1 hunks)
- core/providers/openrouter/batch.go (1 hunks)
- core/providers/parasail/batch.go (1 hunks)
- core/providers/perplexity/batch.go (1 hunks)
- core/providers/sgl/batch.go (1 hunks)
- core/providers/vertex/batch.go (1 hunks)
- core/schemas/batch.go (1 hunks)
- core/schemas/bifrost.go (1 hunks)
- core/schemas/provider.go (3 hunks)
- transports/bifrost-http/integrations/genai.go (14 hunks)
- transports/bifrost-http/integrations/router.go (9 hunks)
- transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
- core/providers/groq/batch.go
- core/providers/parasail/batch.go
- core/providers/cerebras/batch.go
- core/providers/vertex/batch.go
- transports/bifrost-http/integrations/utils.go
- core/providers/azure/batch.go
- core/providers/sgl/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/schemas/bifrost.go
- core/providers/ollama/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/mistral/batch.go
- core/schemas/batch.go
- core/schemas/provider.go
- core/providers/gemini/batch.go
- core/providers/openrouter/batch.go
- core/providers/anthropic/batch.go
- core/providers/bedrock/batch.go
- transports/bifrost-http/integrations/genai.go
- transports/bifrost-http/integrations/router.go
- core/providers/perplexity/batch.go
- core/providers/cohere/batch.go
- core/bifrost.go
- core/providers/gemini/types.go
- core/providers/gemini/files.go
🧠 Learnings (1)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
- core/schemas/bifrost.go
- core/providers/ollama/batch.go
- core/providers/elevenlabs/batch.go
- core/providers/mistral/batch.go
- core/schemas/batch.go
- core/schemas/provider.go
- core/providers/gemini/batch.go
- core/providers/openrouter/batch.go
- core/providers/anthropic/batch.go
- core/providers/bedrock/batch.go
- transports/bifrost-http/integrations/genai.go
- transports/bifrost-http/integrations/router.go
- core/providers/perplexity/batch.go
- core/providers/cohere/batch.go
- core/bifrost.go
- core/providers/gemini/types.go
- core/providers/gemini/files.go
🧬 Code graph analysis (9)
core/schemas/bifrost.go (1)
- ui/lib/types/config.ts (1)
  - RequestType (122-134)

core/providers/ollama/batch.go (3)
- core/schemas/batch.go (2)
  - BifrostBatchDeleteRequest (266-272)
  - BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2)
  - BifrostError (364-373)
  - BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1)
  - NewUnsupportedOperationError (391-402)

core/providers/elevenlabs/batch.go (5)
- core/providers/elevenlabs/elevenlabs.go (1)
  - ElevenlabsProvider (23-29)
- core/schemas/account.go (1)
  - Key (8-17)
- core/schemas/batch.go (2)
  - BifrostBatchDeleteRequest (266-272)
  - BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2)
  - BifrostError (364-373)
  - BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1)
  - NewUnsupportedOperationError (391-402)

core/schemas/batch.go (2)
- core/schemas/provider.go (1)
  - Provider (313-362)
- core/schemas/bifrost.go (2)
  - ModelProvider (32-32)
  - BifrostResponseExtraFields (295-304)

core/schemas/provider.go (2)
- core/schemas/bifrost.go (1)
  - BatchDeleteRequest (103-103)
- core/schemas/batch.go (2)
  - BifrostBatchDeleteRequest (266-272)
  - BifrostBatchDeleteResponse (275-281)

core/providers/bedrock/batch.go (4)
- core/providers/bedrock/bedrock.go (1)
  - BedrockProvider (28-34)
- core/schemas/batch.go (2)
  - BifrostBatchDeleteRequest (266-272)
  - BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2)
  - BifrostError (364-373)
  - BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1)
  - NewUnsupportedOperationError (391-402)

transports/bifrost-http/integrations/router.go (4)
- core/schemas/batch.go (2)
  - BifrostBatchDeleteRequest (266-272)
  - BifrostBatchDeleteResponse (275-281)
- transports/bifrost-http/integrations/genai.go (1)
  - ErrResumableUploadInit (36-36)
- core/schemas/bifrost.go (2)
  - BatchDeleteRequest (103-103)
  - FileUploadRequest (104-104)
- core/schemas/provider.go (1)
  - Provider (313-362)

core/providers/perplexity/batch.go (3)
- core/schemas/batch.go (2)
  - BifrostBatchDeleteRequest (266-272)
  - BifrostBatchDeleteResponse (275-281)
- core/schemas/bifrost.go (2)
  - BifrostError (364-373)
  - BatchDeleteRequest (103-103)
- core/providers/utils/utils.go (1)
  - NewUnsupportedOperationError (391-402)

core/providers/gemini/files.go (1)
- core/schemas/files.go (1)
  - BifrostFileUploadResponse (89-105)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
- GitHub Check: Graphite / mergeability_check
🔇 Additional comments (17)
core/schemas/bifrost.go (1)

85-109: BatchDeleteRequest RequestType addition is consistent with existing Batch operations

The new `BatchDeleteRequest` enum value fits cleanly alongside the other batch request types and provides the necessary discriminator for the new batch delete flow. No issues from this file’s side.

core/providers/bedrock/batch.go (1)

1063-1067: Bedrock BatchDelete stub correctly advertises unsupported operation

The new `BatchDelete` method cleanly returns `NewUnsupportedOperationError` with `schemas.BatchDeleteRequest`, matching the pattern used for other unsupported batch operations. Signature and behavior are consistent with the provider interface.

core/schemas/batch.go (1)

265-281: Batch delete schemas align with existing batch request/response patterns

`BifrostBatchDeleteRequest` and `BifrostBatchDeleteResponse` follow the same conventions as the other batch types (provider + ID + ExtraParams on the request, ID/Object/Deleted + ExtraFields on the response). This should integrate smoothly with the new BatchDelete flow; a usage sketch follows this comment.
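A usage sketch under stated assumptions: the field names follow this comment's description, the `BatchID` spelling follows the Gemini comment below, and the `client`/`ctx` context is assumed:

```go
// Illustrative only; not taken from the PR.
req := &schemas.BifrostBatchDeleteRequest{
	Provider: "gemini",      // provider discriminator
	BatchID:  "batches/123", // exact field name assumed
}

resp, bifrostErr := client.BatchDeleteRequest(ctx, req)
if bifrostErr != nil {
	// handle the wrapped provider error
}
_ = resp.Deleted // true once the provider confirms deletion
```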
core/providers/cohere/batch.go (1)

35-38: Cohere BatchDelete stub is consistent with other unsupported batch operations

The added `BatchDelete` method correctly reports the operation as unsupported using `NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())`, matching the existing Cohere batch stubs.

core/providers/elevenlabs/batch.go (1)

35-38: Elevenlabs BatchDelete stub correctly signals unsupported operation

The `BatchDelete` method follows the same unsupported-operation pattern as the other Elevenlabs batch methods and uses the correct `BatchDeleteRequest` discriminator.

core/providers/mistral/batch.go (1)

35-38: Mistral BatchDelete stub matches existing unsupported batch behavior

The added `BatchDelete` method cleanly returns `NewUnsupportedOperationError` for `BatchDeleteRequest`, consistent with the other Mistral batch stubs.

core/providers/ollama/batch.go (1)

35-38: Ollama BatchDelete stub is consistent with existing unsupported Batch operations

The method cleanly exposes BatchDelete while correctly signaling it as unsupported via `NewUnsupportedOperationError`; aligns with other Batch* stubs in this provider.

core/schemas/provider.go (1)

182-188: BatchDelete wiring in schemas and Provider interface looks correct

`AllowedRequests.BatchDelete`, its `IsOperationAllowed` case, and the new `Provider.BatchDelete` method are consistent with existing batch/file operations and preserve the “nil means allow all” behavior.

Also applies to: 231-233, 350-351

core/providers/perplexity/batch.go (1)

35-38: Perplexity BatchDelete stub matches existing unsupported batch behavior

The new `BatchDelete` method correctly returns `NewUnsupportedOperationError` and aligns with other batch stubs in this provider.

core/providers/anthropic/batch.go (1)

793-796: Anthropic BatchDelete correctly exposed as unsupported

Adding `BatchDelete` that immediately returns `NewUnsupportedOperationError` is consistent with the rest of the Anthropic batch surface and avoids implying support that doesn’t exist.

core/providers/gemini/batch.go (1)

986-1043: Gemini BatchDelete HTTP flow is correct and consistent with other batch methods

The `BatchDelete` implementation correctly:

- Checks operation allowance and validates `BatchID`.
- Builds the DELETE URL for both bare IDs and `batches/...` forms.
- Handles both `200 OK` and `204 No Content` as success.
- Returns a `BifrostBatchDeleteResponse` populated with provider, request type, and latency metadata.

No changes needed here.
transports/bifrost-http/integrations/router.go (1)

80-89: BatchDelete integration in GenericRouter is coherent

`BatchRequest.DeleteRequest`, `BatchDeleteResponseConverter`, the new `RouteConfig.BatchDeleteResponseConverter` field, and the `handleBatchRequest` `BatchDeleteRequest` case all follow the existing patterns for other batch operations and correctly delegate to `g.client.BatchDeleteRequest`.

Also applies to: 159-162, 271-291, 704-855

core/providers/openrouter/batch.go (1)

35-38: OpenRouter BatchDelete stub follows established unsupported pattern

The new `BatchDelete` method is consistent with other unsupported batch operations and correctly uses `NewUnsupportedOperationError`.

transports/bifrost-http/integrations/genai.go (3)

161-314: Gemini file route configs and converters are structurally sound

The new file routes (upload, resumable POST/PUT, list, retrieve, delete) correctly:

- Use `Gemini*` request types at the edge and convert to `BifrostFile*` requests with provider resolution from context.
- Respect raw provider responses via `ExtraFields.RawResponse` when available, otherwise map to SDK-specific responses.
- Wire appropriate `PreCallback`/`PostCallback` for resumable uploads and metadata extraction.

No functional issues spotted here; fixes below focus on helper implementations (binary handling, debug logging, etc.).

Also applies to: 316-435

937-982: Multipart/form-data and raw upload helpers look good

`parseGeminiMultipartUpload` and `parseGeminiRawUpload` correctly:

- Handle optional JSON `metadata` for displayName.
- Read the uploaded file safely into memory and set `uploadReq.File`.
- Default the filename from either metadata or `Content-Disposition`.

No issues beyond the binary concerns already addressed for multipart/related.

Also applies to: 984-1009

1170-1193: Gemini file list/retrieve/delete parameter extractors are consistent

The extractors:

- Set provider from `x-model-provider` with sensible Gemini default.
- Validate `file_id` presence and non‑emptiness.
- Populate the corresponding Gemini SDK request structs.

These look correct and consistent with the rest of the integration.

Also applies to: 1195-1219, 1221-1244

core/providers/gemini/types.go (1)

1578-1589: Job state constants look correct and aligned with SDK surface.

The `GeminiJobState*` string constants cleanly separate SDK job states from internal `GeminiBatchState*` values; no issues from a correctness or maintainability standpoint.
```go
File               []byte `json:"-"`        // Raw file content (not serialized)
Filename           string `json:"filename"` // Original filename
MimeType           string `json:"mimeType"` // MIME type of the file
Purpose            string `json:"purpose"`  // Purpose of the file (e.g., "batch")
ResumableSessionID string `json:"-"`        // Session ID for resumable uploads (internal use)
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Audit of GeminiFileUploadRequest usage and field changes
echo "=== GeminiFileUploadRequest struct definition ==="
rg -A 20 "type GeminiFileUploadRequest struct" core/providers/gemini/types.go
echo -e "\n=== All usages of GeminiFileUploadRequest ==="
rg -n "GeminiFileUploadRequest" . --type go
echo -e "\n=== Check for Provider field references on GeminiFileUploadRequest ==="
rg -n "\.Provider\b" core/providers/gemini -B2 -A2 --type goRepository: maximhq/bifrost
Length of output: 5156
🏁 Script executed:
#!/bin/bash
# Examine the upload parsing functions to verify MimeType is set in all paths
echo "=== parseGeminiFileUploadRequest function ==="
rg -A 50 "^func parseGeminiFileUploadRequest" transports/bifrost-http/integrations/genai.go
echo -e "\n=== parseGeminiMultipartRelated - MimeType handling ==="
rg -A 80 "^func parseGeminiMultipartRelated" transports/bifrost-http/integrations/genai.go | head -100
echo -e "\n=== parseGeminiMultipartUpload - MimeType handling ==="
rg -A 50 "^func parseGeminiMultipartUpload" transports/bifrost-http/integrations/genai.go | head -70
echo -e "\n=== parseGeminiRawUpload - MimeType handling ==="
rg -A 30 "^func parseGeminiRawUpload" transports/bifrost-http/integrations/genai.go | head -50

Repository: maximhq/bifrost
Length of output: 7333
🏁 Script executed:
#!/bin/bash
# Check if MimeType is being extracted from Content-Type headers and examine full parseGeminiResumableUpload
echo "=== Complete parseGeminiResumableUpload function ==="
rg -A 100 "^func parseGeminiResumableUpload" transports/bifrost-http/integrations/genai.go | head -130
echo -e "\n=== Look for Content-Type header usage in MimeType context ==="
rg -n "Content-Type|MimeType|mime_type" transports/bifrost-http/integrations/genai.go | head -40
echo -e "\n=== Check if there are any tests or comments about MimeType requirements ==="
rg -B5 -A5 "MimeType" transports/bifrost-http/integrations/genai.go | head -60

Repository: maximhq/bifrost
Length of output: 7753
🏁 Script executed:
#!/bin/bash
# Check if Provider was previously on GeminiFileUploadRequest and look for Content-Type usage in raw upload
echo "=== Check git history for GeminiFileUploadRequest and Provider field ==="
git log -p --all -S "Provider" -- core/providers/gemini/types.go 2>/dev/null | grep -A20 -B5 "GeminiFileUploadRequest" | head -60
echo -e "\n=== Check if Content-Type header is available in parseGeminiRawUpload ==="
rg -B10 "func parseGeminiRawUpload" transports/bifrost-http/integrations/genai.go | tail -15
echo -e "\n=== Check parseGeminiMultipartUpload for metadata structure ==="
rg -A 35 "func parseGeminiMultipartUpload" transports/bifrost-http/integrations/genai.go | head -45

Repository: maximhq/bifrost
Length of output: 1532
🏁 Script executed:
#!/bin/bash
# Check how GeminiFileUploadRequest is used after parsing and how it's sent to Gemini API
echo "=== Search for where GeminiFileUploadRequest is used after parsing ==="
rg -A 20 "geminiReq.*GeminiFileUploadRequest" transports/bifrost-http/integrations/genai.go | grep -A 20 "PreCallback\|PostCallback" | head -50
echo -e "\n=== Check how the request is serialized for Gemini API ==="
rg -B5 -A15 "func.*PreCallback" transports/bifrost-http/integrations/genai.go | grep -A 15 "GeminiFileUploadRequest" | head -40
echo -e "\n=== Look at file upload request handling in genai.go ==="
rg -n "File.*upload\|CreateFile" transports/bifrost-http/integrations/genai.go | head -20Repository: maximhq/bifrost
Length of output: 247
🏁 Script executed:
#!/bin/bash
# Get broader context around GeminiFileUploadRequest usages
echo "=== Context around line 171-180 ==="
sed -n '165,185p' transports/bifrost-http/integrations/genai.go
echo -e "\n=== Context around line 213-225 ==="
sed -n '208,230p' transports/bifrost-http/integrations/genai.go
echo -e "\n=== Context around line 1015-1030 ==="
sed -n '1010,1035p' transports/bifrost-http/integrations/genai.go
echo -e "\n=== Search for how files are actually sent to Gemini ==="
rg -n "files.create\|files\.create\|CreateFile" transports/bifrost-http/integrations/genai.goRepository: maximhq/bifrost
Length of output: 3014
🏁 Script executed:
#!/bin/bash
# Check BifrostFileUploadRequest struct definition and MimeType field
echo "=== Find BifrostFileUploadRequest struct ==="
rg -A 15 "type BifrostFileUploadRequest struct" .
echo -e "\n=== Check all FileRequestConverter functions for MimeType handling ==="
rg -B2 -A 15 "FileRequestConverter.*func" transports/bifrost-http/integrations/genai.go | grep -A 12 "BifrostFileUploadRequest"
echo -e "\n=== Check the complete conversion at lines 181-190 ==="
sed -n '175,195p' transports/bifrost-http/integrations/genai.go

Repository: maximhq/bifrost
Length of output: 3416
🏁 Script executed:
#!/bin/bash
# Check if MimeType should be passed to Gemini API and how it's expected
echo "=== Search for Gemini file upload implementation/comments ==="
rg -B5 -A10 "files\.create\|Files\|upload.*gemini" core/providers/gemini --type go -i | head -50
echo -e "\n=== Check if ExtraParams is used for MimeType elsewhere ==="
rg -n "ExtraParams\|MIME\|mime_type" core/providers/gemini --type go | head -30
echo -e "\n=== Check if there are any TODO or FIXME comments about MimeType ==="
rg -B2 -A2 "TODO|FIXME" transports/bifrost-http/integrations/genai.go | grep -i "mime\|type\|upload" -B2 -A2

Repository: maximhq/bifrost
Length of output: 251
Verify MimeType is captured and forwarded in all file upload paths.

The change is good for decoupling, but verification reveals gaps:

- `parseGeminiMultipartUpload` does not extract `mime_type` from the metadata JSON (unlike `parseGeminiMultipartRelated` and `parseGeminiResumableUpload`).
- `parseGeminiRawUpload` does not attempt to extract MIME type from the `Content-Type` header.
- Most critically, none of the `FileRequestConverter` functions forward `MimeType` to `BifrostFileUploadRequest`; it's parsed into `GeminiFileUploadRequest` but then discarded.

The Provider field was never on `GeminiFileUploadRequest` (it's internal to `uploadSession`), so no callers depend on removing it. A sketch of the missing forwarding follows.
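A sketch of the missing copy inside one of the converters; the Bifrost field names besides `MimeType` are assumed from context, not read from the PR:

```go
// Hypothetical converter body: forward the parsed MIME type instead of
// dropping it during the Gemini -> Bifrost conversion.
bifrostReq := &schemas.BifrostFileUploadRequest{
	Provider: provider,
	File:     geminiReq.File,
	Filename: geminiReq.Filename,
	Purpose:  geminiReq.Purpose,
	MimeType: geminiReq.MimeType, // currently discarded; copy it through
}
```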
🤖 Prompt for AI Agents
In core/providers/gemini/types.go around lines 1657 to 1661, the MimeType parsed
into GeminiFileUploadRequest is not consistently extracted or forwarded: update
parseGeminiMultipartUpload to read "mime_type" from the metadata JSON like the
other parsers, update parseGeminiRawUpload to attempt to extract the MIME type
from the Content-Type header when present, and modify all FileRequestConverter
functions to copy GeminiFileUploadRequest.MimeType into the resulting
BifrostFileUploadRequest so the MIME type isn't discarded during conversion.

Summary
Add batch delete functionality to the Bifrost API, allowing users to delete batch jobs. This implementation primarily focuses on the Gemini provider while adding the necessary interfaces for other providers.
Changes
- Added `BatchDeleteRequest` method to the Bifrost core
- Added `BatchDelete` interface method to all providers (with unsupported operation responses for non-Gemini providers)

Type of change
Affected areas
How to test
Test batch deletion with the Gemini provider:
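A sketch of a manual check, with placeholder host, route, and batch name; the exact gateway path should be taken from the new GenAI route table rather than from this example:

```bash
# Delete a previously created batch job through the Bifrost GenAI surface.
# <batch-id> is the job name returned by batch create, e.g. batches/abc123.
curl -X DELETE \
  -H "x-model-provider: gemini" \
  "http://localhost:8080/genai/v1beta/<batch-id>"

# Expect HTTP 200 with deleted=true (a provider 204 is also treated as
# success per the Gemini BatchDelete implementation).
```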
Breaking changes
Related issues
Implements batch deletion functionality for better resource management.
Security considerations
No additional security implications beyond existing authentication mechanisms.
Checklist